LiftPose3D, a deep learning-based approach for transforming 2D to 3D pose in laboratory animals

2020
Author(s):  
Adam Gosztolai ◽  
Semih Günel ◽  
Marco Pietro Abrate ◽  
Daniel Morales ◽  
Victor Lobato Ríos ◽  
...  

Abstract
Markerless 3D pose estimation has become an indispensable tool for kinematic studies of laboratory animals. Most current methods recover 3D pose by multi-view triangulation of deep network-based 2D pose estimates. However, triangulation requires multiple, synchronised cameras per keypoint and elaborate calibration protocols that hinder its widespread adoption in laboratory studies. Here, we describe LiftPose3D, a deep network-based method that overcomes these barriers by reconstructing 3D poses from a single 2D camera view. We illustrate LiftPose3D’s versatility by applying it to multiple experimental systems using flies, mice, and macaque monkeys and in circumstances where 3D triangulation is impractical or impossible. Thus, LiftPose3D permits high-quality 3D pose estimation in the absence of complex camera arrays and tedious calibration procedures, and despite occluded keypoints in freely behaving animals.
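The core idea of such lifting approaches can be sketched as a small regression network that maps a vector of 2D keypoint coordinates from one camera view to 3D keypoints. The sketch below is a minimal illustration of that idea in PyTorch; the keypoint count, layer sizes, and training target are assumptions for illustration, not the published LiftPose3D architecture.

```python
# Minimal sketch of a 2D-to-3D "lifting" network.
# Layer sizes and keypoint count are illustrative assumptions, not the LiftPose3D model.
import torch
import torch.nn as nn

NUM_KEYPOINTS = 38  # assumed number of tracked keypoints

class Lifter(nn.Module):
    def __init__(self, n_kp=NUM_KEYPOINTS, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_kp, hidden), nn.ReLU(), nn.Dropout(0.25),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(0.25),
            nn.Linear(hidden, 3 * n_kp),  # predict x, y, z for every keypoint
        )

    def forward(self, pose_2d):
        # pose_2d: (batch, n_kp, 2) pixel coordinates from a single camera view
        b = pose_2d.shape[0]
        return self.net(pose_2d.reshape(b, -1)).reshape(b, -1, 3)

# Training would regress against 3D ground truth (e.g. triangulated or motion capture):
model = Lifter()
pred_3d = model(torch.randn(8, NUM_KEYPOINTS, 2))
loss = nn.functional.mse_loss(pred_3d, torch.randn(8, NUM_KEYPOINTS, 3))
```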

Author(s):  
Cristina Segalin ◽  
Jalani Williams ◽  
Tomomi Karigo ◽  
May Hui ◽  
Moriel Zelikowsky ◽  
...  

Abstract
The study of social behavior requires scoring the animals’ interactions. This is generally done by hand, a time-consuming, subjective, and expensive process. Recent advances in computer vision enable tracking the pose (posture) of freely behaving laboratory animals automatically. However, classifying complex social behaviors such as mounting and attack remains technically challenging. Furthermore, the extent to which expert annotators, possibly from different labs, agree on the definitions of these behaviors varies. There is a shortage in the neuroscience community of benchmark datasets that can be used to evaluate the performance and reliability of both pose estimation tools and manual and automated behavior scoring.
We introduce the Mouse Action Recognition System (MARS), an automated pipeline for pose estimation and behavior quantification in pairs of freely behaving mice. We compare MARS’s annotations to human annotations and find that MARS’s pose estimation and behavior classification achieve human-level performance. As a by-product, we characterize the inter-expert variability in behavior scoring. The two novel datasets used to train MARS were collected from ongoing experiments in social behavior and identify the main sources of disagreement between annotators. They comprise 30,000 frames of manually annotated mouse poses and over 14 hours of manually annotated behavioral recordings in a variety of experimental preparations. We are releasing this dataset alongside MARS to serve as community benchmarks for pose and behavior systems. Finally, we introduce the Behavior Ensemble and Neural Trajectory Observatory (Bento), a graphical interface that allows users to quickly browse, annotate, and analyze datasets including behavior videos, pose estimates, behavior annotations, audio, and neural recording data. We demonstrate the utility of MARS and Bento in two use cases: a high-throughput behavioral phenotyping study and exploration of a novel imaging dataset. Together, MARS and Bento provide an end-to-end pipeline for behavior data extraction and analysis, in a package that is user-friendly and easily modifiable.
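To illustrate the second stage of such a pipeline, classifying behaviors frame by frame from tracked poses, the sketch below trains a simple classifier on hand-crafted pose features for a pair of animals. The feature set, classifier choice, and label scheme are assumptions for illustration, not the published MARS classifiers.

```python
# Sketch: frame-wise behavior classification from pose features of two interacting mice.
# Features and classifier are illustrative assumptions, not the MARS implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pose_features(poses_a, poses_b):
    """poses_*: (n_frames, n_keypoints, 2) tracked 2D keypoints for each mouse."""
    centroid_a = poses_a.mean(axis=1)
    centroid_b = poses_b.mean(axis=1)
    dist = np.linalg.norm(centroid_a - centroid_b, axis=1)  # inter-animal distance
    speed_a = np.linalg.norm(np.diff(centroid_a, axis=0, prepend=centroid_a[:1]), axis=1)
    speed_b = np.linalg.norm(np.diff(centroid_b, axis=0, prepend=centroid_b[:1]), axis=1)
    return np.stack([dist, speed_a, speed_b], axis=1)

# Toy data standing in for tracked poses and expert frame labels (e.g. 0 = other, 1 = attack, 2 = mount)
n_frames, n_kp = 1000, 7
poses_a = np.random.rand(n_frames, n_kp, 2)
poses_b = np.random.rand(n_frames, n_kp, 2)
labels = np.random.randint(0, 3, size=n_frames)

X = pose_features(poses_a, poses_b)
clf = RandomForestClassifier(n_estimators=100).fit(X[:800], labels[:800])
print("held-out frame accuracy:", clf.score(X[800:], labels[800:]))
```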


2021
Author(s):  
Jesse D Marshall ◽  
Ugne Klibaite ◽  
Amanda J Gellis ◽  
Diego E Aldarondo ◽  
Bence P Olveczky ◽  
...  

Understanding the biological basis of social and collective behaviors in animals is a key goal of the life sciences, and may yield important insights for engineering intelligent multi-agent systems. A critical step in interrogating the mechanisms underlying social behaviors is a precise readout of the 3D pose of interacting animals. While approaches for multi-animal pose estimation are beginning to emerge, they remain challenging to compare due to the lack of standardized training and benchmark datasets. Here we introduce the PAIR-R24M (Paired Acquisition of Interacting oRganisms - Rat) dataset for multi-animal 3D pose estimation, which contains 24.3 million frames of RGB video and 3D ground-truth motion capture of dyadic interactions in laboratory rats. PAIR-R24M contains data from 18 distinct pairs of rats and 24 different viewpoints. We annotated the data with 11 behavioral labels and 3 interaction categories to facilitate benchmarking in rare but challenging behaviors. To establish a baseline for markerless multi-animal 3D pose estimation, we developed a multi-animal extension of DANNCE, a recently published network for 3D pose estimation in freely behaving laboratory animals. As the first large multi-animal 3D pose estimation dataset, PAIR-R24M will help advance 3D animal tracking approaches and aid in elucidating the neural basis of social behaviors.
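When benchmarking 3D pose estimators on a dataset like this, a common per-frame error measure is the mean distance between predicted and ground-truth 3D keypoints (MPJPE), extended to two animals by first resolving identity assignment. The sketch below is an illustrative version under the assumption that predictions and ground truth are already in the same world coordinates; it is not the evaluation code accompanying the dataset.

```python
# Sketch: mean per-joint position error (MPJPE) for dyadic 3D pose with simple identity matching.
# Array shapes and the matching rule are illustrative assumptions.
import numpy as np

def mpjpe(pred, gt):
    """pred, gt: (n_keypoints, 3) in the same world coordinates (e.g. millimetres)."""
    return np.linalg.norm(pred - gt, axis=1).mean()

def dyadic_mpjpe(pred_pair, gt_pair):
    """pred_pair, gt_pair: (2, n_keypoints, 3); match predicted animals to ground truth
    by whichever of the two possible assignments yields the lower total error."""
    direct = mpjpe(pred_pair[0], gt_pair[0]) + mpjpe(pred_pair[1], gt_pair[1])
    swapped = mpjpe(pred_pair[0], gt_pair[1]) + mpjpe(pred_pair[1], gt_pair[0])
    return min(direct, swapped) / 2.0

# Toy example with 20 keypoints per rat
pred = np.random.rand(2, 20, 3)
gt = pred + 0.01 * np.random.randn(2, 20, 3)
print("dyadic MPJPE:", dyadic_mpjpe(pred, gt))
```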


Author(s):  
Jun Liu ◽  
Henghui Ding ◽  
Amir Shahroudy ◽  
Ling-Yu Duan ◽  
Xudong Jiang ◽  
...  

2019
Vol 5 (1)
pp. 9-12
Author(s):  
Jyothsna Kondragunta ◽  
Christian Wiede ◽  
Gangolf Hirtz

Abstract
Better handling of neurological or neurodegenerative disorders such as Parkinson’s Disease (PD) is only possible with early identification of the relevant symptoms. Although the disease itself cannot be cured, its effects can be delayed with proper care and treatment. Early identification of PD symptoms therefore plays a key role. Recent studies report that gait abnormalities are clearly evident when people with PD perform dual cognitive tasks. Research has also shown that early identification of abnormal gait leads to earlier identification of PD. Novel technologies provide many options for the identification and analysis of human gait, and can be broadly classified as wearable and non-wearable technologies. As PD is more prominent in elderly people, wearable sensors may hinder a person’s natural movement and are considered out of scope for this paper. Non-wearable technologies, especially Image Processing (IP) approaches, capture a person’s gait data through optical sensors. Existing IP approaches to gait analysis are restricted by parameters such as the angle of view, the background, and occlusions caused by objects or by the person’s own body movements. To date, gait has not been analyzed through 3D pose estimation in this context. As deep learning has proven effective for 2D pose estimation, we propose 3D pose estimation together with a suitable dataset. This paper outlines the advantages and disadvantages of state-of-the-art methods for gait analysis applied to early PD identification. Furthermore, it outlines the importance of extracting gait parameters from 3D pose estimation using deep learning.
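As an illustration of how gait parameters could be extracted from 3D pose estimates, the sketch below computes mean step length and cadence from ankle keypoint trajectories. The keypoint choice, frame rate, and heel-strike heuristic are assumptions for illustration, not the method proposed in the paper.

```python
# Sketch: simple gait parameters (step length, cadence) from 3D ankle trajectories.
# Frame rate and heel-strike heuristic are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks

FPS = 30  # assumed camera frame rate

def gait_parameters(left_ankle, right_ankle):
    """left_ankle, right_ankle: (n_frames, 3) 3D positions in metres."""
    # Heel strikes are approximated as local maxima of the inter-ankle distance
    # (the ankles are furthest apart around double support).
    separation = np.linalg.norm(left_ankle - right_ankle, axis=1)
    strikes, _ = find_peaks(separation, distance=FPS // 3)
    step_lengths = separation[strikes]               # metres per step
    duration_s = len(left_ankle) / FPS
    cadence = 60.0 * len(strikes) / duration_s       # steps per minute
    return step_lengths.mean(), cadence

# Toy trajectory standing in for estimated 3D poses during straight walking
t = np.linspace(0, 10, 10 * FPS)
left = np.stack([t + 0.3 * np.sin(2 * np.pi * t), np.zeros_like(t), np.zeros_like(t)], axis=1)
right = np.stack([t - 0.3 * np.sin(2 * np.pi * t), np.full_like(t, 0.2), np.zeros_like(t)], axis=1)
mean_step, cadence = gait_parameters(left, right)
print(f"mean step length {mean_step:.2f} m, cadence {cadence:.1f} steps/min")
```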


Author(s):  
Junting Dong ◽  
Qi Fang ◽  
Wen Jiang ◽  
Yurou Yang ◽  
Qixing Huang ◽  
...  

2021
Author(s):  
Artur Schneider ◽  
Christian Zimmermann ◽  
Mansour Alyahyay ◽  
Thomas Brox ◽  
Ilka Diester

2021
Author(s):  
Minghao Wang ◽  
Long Ye ◽  
Fei Hu ◽  
Li Fang ◽  
Wei Zhong ◽  
...  
