finFindR : Automated recognition and identification of marine mammal dorsal fins using residual convolutional neural networks

2021 ◽  
Author(s):  
Jaime W. Thompson ◽  
Victoria H. Zero ◽  
Lori H. Schwacke ◽  
Todd R. Speakman ◽  
Brian M. Quigley ◽  
...


Author(s):
O.N. Korsun ◽  
V.N. Yurko

We analysed two approaches to estimating the state of a human operator from video imaging of the face. Both approaches use deep convolutional neural networks: 1) automated emotion recognition; 2) analysis of blinking characteristics. The study assessed changes in the functional state of a human operator performing a manual landing in a flight simulator. During this process, flight parameters were recorded and the operator's face was filmed. We then used our custom software to automatically recognise emotions (and blinking) and to synchronise the recognised events with the recorded flight parameters. As a result, we detected persistent patterns linking the operator's fatigue level to the number of emotions recognised by the neural network. The type of emotion depends on the unique psychological characteristics of the operator. Our experiments show that these links are easily traced when analysing the emotions "Sadness", "Fear" and "Anger". The study also revealed a correlation between blinking properties and piloting accuracy: higher piloting accuracy was accompanied by more recorded blinks, which may be explained by a stable psycho-physiological state leading to confident piloting.
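The reported link between blink counts and piloting accuracy can be illustrated with a simple correlation computation. This is a hypothetical sketch, not the study's analysis pipeline: the session data below are invented, and the paper does not specify which correlation statistic was used (Pearson's r is assumed here).

```python
# Hypothetical sketch: correlate per-session blink counts with piloting
# accuracy scores, mirroring the finding that higher accuracy coincided
# with more recorded blinks. All data values are invented for illustration.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One entry per simulated landing: blinks counted, accuracy score in [0, 1].
blinks   = [12, 18, 9, 22, 15, 25]
accuracy = [0.61, 0.74, 0.55, 0.83, 0.70, 0.88]

r = pearson_r(blinks, accuracy)
print(f"blink-accuracy correlation r = {r:.2f}")
```

A positive r on such data would be consistent with the abstract's claim; in practice the study synchronised the video-derived events to the recorded flight parameters before any such analysis.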


2021 ◽  
Vol 2021 (1) ◽  
pp. 85-106
Author(s):  
Arezoo Rajabi ◽  
Rakesh B. Bobba ◽  
Mike Rosulek ◽  
Charles V. Wright ◽  
Wu-chi Feng

Image hosting platforms are a popular way to store and share images with family members and friends. However, such platforms typically have full access to images, raising privacy concerns. These concerns are further exacerbated with the advent of Convolutional Neural Networks (CNNs) that can be trained on available images to automatically detect and recognize faces with high accuracy. Recently, adversarial perturbations have been proposed as a potential defense against automated recognition and classification of images by CNNs. In this paper, we explore the practicality of adversarial perturbation-based approaches as a privacy defense against automated face recognition. Specifically, we first identify practical requirements for such approaches and then propose two practical adversarial perturbation approaches: (i) learned universal ensemble perturbations (UEP), and (ii) k-randomized transparent image overlays (k-RTIO), which are semantic adversarial perturbations. We demonstrate how users can generate effective transferable perturbations under realistic assumptions with less effort. We evaluate the proposed methods against state-of-the-art online and offline face recognition models, Clarifai.com and DeepFace, respectively. Our findings show that UEP and k-RTIO achieve more than 85% and 90% success, respectively, against face recognition models. Additionally, we explore potential countermeasures that classifiers can use to thwart the proposed defenses. In particular, we demonstrate one effective countermeasure against UEP.
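The core mechanism behind a transparent-overlay defense such as k-RTIO can be sketched as alpha-blending one of k pseudo-randomly generated overlay patterns onto an image before upload. This is a minimal illustration of the idea only; the overlay construction, keying, and randomization below are invented stand-ins, not the paper's actual k-RTIO algorithm.

```python
# Minimal sketch of the transparent-overlay idea: pick one of k
# pseudo-random overlays (keyed by a seed) and alpha-blend it onto the
# image. Overlay generation here is a placeholder, not the k-RTIO scheme.
import random

def blend(image, overlay, alpha=0.3):
    """Alpha-blend an overlay onto an image (both HxW grayscale, 0-255)."""
    return [[round((1 - alpha) * p + alpha * o)
             for p, o in zip(img_row, ov_row)]
            for img_row, ov_row in zip(image, overlay)]

def overlay_defense(image, k=4, seed=42):
    """Generate k seeded random overlays, choose one, and blend it in."""
    rng = random.Random(seed)
    overlays = [[[rng.randrange(256) for _ in row] for row in image]
                for _ in range(k)]
    chosen = rng.choice(overlays)
    return blend(image, chosen)

img = [[100] * 4 for _ in range(4)]   # toy 4x4 grayscale image
perturbed = overlay_defense(img)
```

The design intuition is that a semantic perturbation visible as a faint overlay degrades CNN face recognition while remaining acceptable to human viewers; the actual paper evaluates this against Clarifai.com and DeepFace.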


2020 ◽  
Vol 140 ◽  
pp. 104498 ◽  
Author(s):  
Benjamin Bourel ◽  
Ross Marchant ◽  
Thibault de Garidel-Thoron ◽  
Martin Tetard ◽  
Doris Barboni ◽  
...  

2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is very important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research in image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks composed of an image restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which we use a degraded image and an additional degradation parameter for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for degradation parameters is also incorporated for cases where the degradation parameters of degraded images are unknown. The experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.

