Quantifying influence of human choice on the automated detection of Drosophila behavior by a supervised machine learning algorithm

PLoS ONE, 2020, Vol 15 (12), pp. e0241696
Author(s): Xubo Leng, Margot Wohl, Kenichi Ishii, Pavan Nayak, Kenta Asahina

Automated quantification of behavior is increasingly prevalent in neuroscience research. Human judgments can influence machine-learning-based behavior classification at multiple steps in the process, for both supervised and unsupervised approaches. Such steps include the design of the algorithm for machine learning, the methods used for animal tracking, the choice of training images, and the benchmarking of classification outcomes. However, how these design choices contribute to the interpretation of automated behavioral classifications has not been extensively characterized. Here, we quantify the effects of experimenter choices on the outputs of automated classifiers of Drosophila social behaviors. Drosophila behaviors contain a considerable degree of variability, which was reflected in the confidence levels associated with both human and computer classifications. We found that a diversity of sex combinations and tracking features was important for robust performance of the automated classifiers. In particular, features concerning the relative position of flies contained useful information for training a machine-learning algorithm. These observations shed light on the importance of human influence on tracking algorithms, the selection of training images, and the quality of annotated sample images used to benchmark the performance of a classifier (the ‘ground truth’). Evaluation of these factors is necessary for researchers to accurately interpret behavioral data quantified by a machine-learning algorithm and to further improve automated classifications.

Significance Statement

Accurate quantification of animal behaviors is fundamental to neuroscience. Here, we quantitatively assess how human choices influence the performance of automated classifiers trained by a machine-learning algorithm. We found that human decisions about the computational tracking method, the training images, and the images used for performance evaluation impact both the classifier outputs and how human observers interpret the results. These factors are sometimes overlooked but are critical, especially because animal behavior is itself inherently variable. Automated quantification of animal behavior is becoming increasingly prevalent: our results provide a model for bridging the gap between traditional human annotations and computer-based annotations. Systematic assessment of human choices is important for developing behavior classifiers that perform robustly in a variety of experimental conditions.


