Exterior and interior shot classification for automatic content-based video indexing

Author(s):  
Mohsen Ardebilian Fard ◽  
Walid Mahdi ◽  
Liming Chen
2001 ◽  
Vol 32 (9) ◽  
pp. 32-41 ◽  
Author(s):  
Ichiro Ide ◽  
Koji Yamamoto ◽  
Reiko Hamada ◽  
Hidehiko Tanaka

Author(s):  
Igor' Latyshov ◽  
Fedor Samuylenko

This research addresses the challenge of constructing a system of scientific knowledge about shot conditions in judicial ballistics. Several underlying factors are intended to ensure its consistency: identifying the list of shot conditions that require consideration when solving expert-level research tasks on weapons, cartridges, and the traces of their action; determining the systems of connections formed in the course of the objects' interaction, which represent the result of the shot conditions' influence; and classifying shot conditions on grounds significant for solving scientific and practical problems. The article characterizes the constructive and functional factors (conditions) of the influence of weapons and cartridges, environmental and fire factors, the structure of the target and its physical properties, situational and spatial factors, and the energy characteristics of the projectile. It highlights the forms of connections formed in the course of the objects' interaction and proposes the author's classifications of forensically significant shooting conditions, divided according to the following criteria: origin from an object of interaction; origin from a natural phenomenon; method of production; results of weapon operation and use; duration of exposure; type of structural connections between the interacting objects; and the number of conditions acting during firing and trace formation.


1996 ◽  
Author(s):  
Vikrant Kobla ◽  
David Doermann ◽  
King-Ip Lin ◽  
Christos Faloutsos

1999 ◽  
Vol 28 (1) ◽  
pp. 32-39 ◽  
Author(s):  
Arun Hampapur

AI ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 195-208
Author(s):  
Gabriel Dahia ◽  
Maurício Pamplona Segundo

We propose a method that can perform one-class classification given only a small number of examples from the target class and none from the others. We formulate the learning of meaningful features for one-class classification as a meta-learning problem in which the meta-training stage repeatedly simulates one-class classification, using the classification loss of the chosen algorithm to learn a feature representation. To learn these representations, we require only multiclass data from similar tasks. We show how the Support Vector Data Description method can be used with our method, and also propose a simpler variant based on Prototypical Networks that obtains comparable performance, indicating that learning feature representations directly from data may be more important than the choice of one-class algorithm. We validate our approach by adapting few-shot classification datasets to the few-shot one-class classification scenario, obtaining results similar to the state of the art in traditional one-class classification and improving upon those of one-class classification baselines employed in the few-shot setting.
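For illustration, here is a minimal sketch of the episodic meta-training loop the abstract describes, using the simpler Prototypical-Networks-style one-class variant. The encoder, the episode sampler, and all hyperparameters are illustrative assumptions, not the authors' implementation: a real setup would sample episodes from multiclass data of similar tasks rather than synthetic Gaussians.

```python
# Minimal sketch of episodic meta-training for few-shot one-class
# classification (Prototypical-Networks-style variant). Every name and
# hyperparameter here is an illustrative assumption, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(          # toy feature extractor for flat inputs
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 32),
)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def sample_episode(n_support=5, n_query=10, dim=64):
    """Simulate one one-class episode. Real meta-training would draw a
    'target' class from multiclass data for the support set and mix
    target/non-target examples as queries; Gaussians stand in here."""
    target_mean = torch.randn(dim)
    support = target_mean + 0.1 * torch.randn(n_support, dim)
    pos_queries = target_mean + 0.1 * torch.randn(n_query, dim)
    neg_queries = torch.randn(n_query, dim)     # examples of other classes
    queries = torch.cat([pos_queries, neg_queries])
    labels = torch.cat([torch.ones(n_query), torch.zeros(n_query)])
    return support, queries, labels

for step in range(1000):
    support, queries, labels = sample_episode()
    prototype = encoder(support).mean(dim=0)    # one-class prototype
    sq_dist = ((encoder(queries) - prototype) ** 2).sum(dim=1)
    # Negative squared distance to the prototype acts as the target logit.
    loss = F.binary_cross_entropy_with_logits(-sq_dist, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```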


Author(s):  
Daichi Kitaguchi ◽  
Nobuyoshi Takeshita ◽  
Hiroki Matsuzaki ◽  
Hiro Hasegawa ◽  
Takahiro Igaki ◽  
...  

Abstract
Background: Dividing a surgical procedure into a sequence of identifiable and meaningful steps facilitates intraoperative video data acquisition and storage. These efforts are especially valuable for technically challenging procedures that require intraoperative video analysis, such as transanal total mesorectal excision (TaTME); however, manual video indexing is time-consuming. Thus, in this study, we constructed an annotated video dataset for TaTME with surgical step information and evaluated the performance of a deep learning model in recognizing the surgical steps in TaTME.
Methods: This was a single-institutional retrospective feasibility study. All TaTME intraoperative videos were divided into frames. Each frame was manually annotated as one of the following major steps: (1) purse-string closure; (2) full-thickness transection of the rectal wall; (3) down-to-up dissection; (4) dissection after rendezvous; and (5) purse-string suture for stapled anastomosis. Steps 3 and 4 were each further classified into four sub-steps, specifically for dissection of the anterior, posterior, right, and left planes. A convolutional neural network-based deep learning model, Xception, was utilized for the surgical step classification task.
Results: Our dataset containing 50 TaTME videos was randomly divided into two subsets for training and testing, with 40 and 10 videos, respectively. The overall accuracy obtained for all classification steps was 93.2%. By contrast, when sub-step classification was included in the performance analysis, a mean accuracy (± standard deviation) of 78% (± 5%), with a maximum accuracy of 85%, was obtained.
Conclusions: To the best of our knowledge, this is the first study on automatic surgical step classification for TaTME. Our deep learning model self-learned and recognized the classification steps in TaTME videos with high accuracy after training. Thus, our model can be applied to a system for intraoperative guidance or for postoperative video indexing and analysis in TaTME procedures.
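As a rough illustration of this classification setup, the sketch below fine-tunes an Xception backbone on extracted video frames with a five-class head matching the major steps above. It assumes the timm implementation of Xception and a hypothetical tatme_frames/ folder layout; batch size, learning rate, and preprocessing are assumptions, not details from the study.

```python
# Minimal sketch of frame-level surgical-step classification with an
# Xception backbone via the timm library. Paths, hyperparameters, and the
# folder layout are assumptions, not details from the study.
import timm
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

NUM_STEPS = 5  # the five major TaTME steps listed in the abstract

model = timm.create_model("xception", pretrained=True, num_classes=NUM_STEPS)
preprocess = transforms.Compose([
    transforms.Resize((299, 299)),          # Xception's native input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])

# Hypothetical directory of extracted frames, one subfolder per step label.
train_set = datasets.ImageFolder("tatme_frames/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for frames, step_labels in loader:
    logits = model(frames)                  # shape: (batch, NUM_STEPS)
    loss = criterion(logits, step_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```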


2021 ◽  
Author(s):  
Yuan-Chia Cheng ◽  
Ci-Siang Lin ◽  
Fu-En Yang ◽  
Yu-Chiang Frank Wang
