Few-shot classification without forgetting of event-camera data

2021 ◽  
Author(s):  
Anik Goyal ◽  
Soma Biswas


Author(s):  
Igor' Latyshov ◽  
Fedor Samuylenko

This research considers the challenge of constructing a system of scientific knowledge about shot conditions in judicial ballistics. It identifies the underlying factors intended to ensure its consistency: identifying the list of shot conditions that must be considered when solving expert-level research tasks on weapons, cartridges, and the traces of their action; determining the systems of connections formed in the course of the objects' interaction, which present the result of exposure to the conditions of the shot; and classifying the shot conditions on grounds significant for solving scientific and practical problems. The article characterizes the constructive and functional factors (conditions) of the influence of weapons and cartridges, environmental and fire factors, the structure of the target and its physical properties, situational and spatial factors, and projectile energy characteristics. It highlights the forms of connections formed in the course of the objects' interaction and proposes the author's classifications of forensically significant shooting conditions, divided by the following criteria: origin from an object of interaction, origin from a natural phenomenon, method of production, results of weapon operation and use, duration of exposure, type of structural connections between the interacting objects, and the number of conditions that apply during firing and trace formation.


AI ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 195-208
Author(s):  
Gabriel Dahia ◽  
Maurício Pamplona Segundo

We propose a method that can perform one-class classification given only a small number of examples from the target class and none from the others. We formulate the learning of meaningful features for one-class classification as a meta-learning problem in which the meta-training stage repeatedly simulates one-class classification, using the classification loss of the chosen algorithm to learn a feature representation. To learn these representations, we require only multiclass data from similar tasks. We show how the Support Vector Data Description method can be used with our method, and also propose a simpler variant based on Prototypical Networks that obtains comparable performance, indicating that learning feature representations directly from data may be more important than which one-class algorithm we choose. We validate our approach by adapting few-shot classification datasets to the few-shot one-class classification scenario, obtaining results comparable to the state of the art in traditional one-class classification and improving upon one-class classification baselines employed in the few-shot setting.
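The Prototypical-Networks-style variant described in the abstract can be illustrated at inference time: the few target-class embeddings are averaged into a prototype, and a query is accepted if it lies close enough to that prototype. The sketch below is a minimal NumPy illustration of this idea, not the authors' code; the feature vectors, threshold, and all names are hypothetical (the meta-learned embedding network from the paper is not reproduced):

```python
import numpy as np

def prototype(support_features):
    """Mean embedding of the few labeled target-class examples."""
    return support_features.mean(axis=0)

def one_class_scores(query_features, proto):
    """Negative squared Euclidean distance to the prototype.
    Higher score = more likely to belong to the target class."""
    return -np.sum((query_features - proto) ** 2, axis=1)

def classify(query_features, proto, threshold):
    """Accept a query as target-class if its score exceeds the threshold."""
    return one_class_scores(query_features, proto) > threshold

# Toy example: 2-D vectors stand in for learned embeddings.
support = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1]])  # few target-class examples
proto = prototype(support)
queries = np.array([[1.0, 0.9],    # near the prototype -> accepted
                    [5.0, -3.0]])  # far away -> rejected
print(classify(queries, proto, threshold=-1.0))  # → [ True False]
```

In the actual method the inputs would be embeddings produced by the meta-trained feature extractor, and the threshold would be calibrated on held-out data.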


Author(s):  
Mathias Gehrig ◽  
Willem Aarents ◽  
Daniel Gehrig ◽  
Davide Scaramuzza
2021 ◽  
Author(s):  
Yuan-Chia Cheng ◽  
Ci-Siang Lin ◽  
Fu-En Yang ◽  
Yu-Chiang Frank Wang

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1137
Author(s):  
Ondřej Holešovský ◽  
Radoslav Škoviera ◽  
Václav Hlaváč ◽  
Roman Vítek

We compare event-cameras with fast (global shutter) frame-cameras experimentally, asking: “What is the application domain in which an event-camera surpasses a fast frame-camera?” Surprisingly, finding the answer has been difficult. Our methodology was to test event- and frame-cameras on generic computer vision tasks where event-camera advantages should manifest. We used two methods: (1) a controlled, cheap, and easily reproducible experiment (observing a marker on a rotating disk at varying speeds); (2) a challenging practical ballistic experiment (observing a flying bullet, with ground truth provided by an expensive ultra-high-speed frame-camera). The experimental results include sampling/detection rates and position estimation errors as functions of illuminance and motion speed, and the minimum pixel latency of two commercial state-of-the-art event-cameras (ATIS, DVS240). Event-cameras respond more slowly to large, sudden positive contrast changes than to negative ones. They outperformed the frame-camera in bandwidth efficiency in all our experiments. Both camera types provide comparable position estimation accuracy. The better event-camera was limited by pixel latency when tracking small objects, resulting in motion blur effects. Sensor bandwidth limited the event-camera in object recognition. However, future generations of event-cameras might alleviate these bandwidth limitations.
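A common baseline for the kind of position estimation measured above is to accumulate events over a short time window and take their centroid as the object position. The sketch below is an illustrative simplification, not the authors' pipeline; the event tuple layout, window length, and function names are assumptions:

```python
import numpy as np

def event_centroids(events, window_us=1000):
    """Estimate object position per time window from an event stream.

    events: array of (t_us, x, y, polarity) rows, sorted by timestamp.
    Returns a list of (t_start_us, cx, cy) centroid estimates.
    """
    centroids = []
    if len(events) == 0:
        return centroids
    t0, t_end = events[0, 0], events[-1, 0]
    while t0 <= t_end:
        # Select all events falling in the current time window.
        mask = (events[:, 0] >= t0) & (events[:, 0] < t0 + window_us)
        chunk = events[mask]
        if len(chunk) > 0:
            centroids.append((t0, chunk[:, 1].mean(), chunk[:, 2].mean()))
        t0 += window_us
    return centroids

# Synthetic stream: a noisy cluster of events drifting right over 3 ms.
rng = np.random.default_rng(0)
events = np.vstack([
    np.column_stack([
        np.full(50, t),
        10 + t / 100 + rng.normal(0, 0.5, 50),  # x drifts with time
        20 + rng.normal(0, 0.5, 50),            # y stays put
        np.ones(50)])
    for t in (0, 1000, 2000)
]).astype(float)
for t, cx, cy in event_centroids(events):
    print(f"t={t:.0f}us centroid=({cx:.1f}, {cy:.1f})")
```

Real trackers must additionally cope with the asymmetric pixel latency and bandwidth limits reported in the abstract, which bias such centroid estimates for fast, small objects.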


2021 ◽  
Author(s):  
Zehao Chen ◽  
Qian Zheng ◽  
Peisong Niu ◽  
Huajin Tang ◽  
Gang Pan
