DeepLabStream enables closed-loop behavioral experiments using deep learning-based markerless, real-time posture detection

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Jens F. Schweihoff ◽  
Matvey Loshakov ◽  
Irina Pavlova ◽  
Laura Kück ◽  
Laura A. Ewell ◽  
...  

Abstract: In general, animal behavior can be described as a neuronally driven sequence of recurring postures through time. Most currently available technologies focus on offline pose estimation with high spatiotemporal resolution. However, to correlate behavior with neuronal activity, it is often necessary to detect and react to behavioral expressions online. Here we present DeepLabStream, a versatile closed-loop tool providing real-time pose estimation to deliver posture-dependent stimulation. DeepLabStream has a temporal resolution in the millisecond range, can utilize different input and output devices, and can be tailored to multiple experimental designs. We employ DeepLabStream to semi-autonomously run a second-order olfactory conditioning task with freely moving mice and to optogenetically label neuronal ensembles active during specific head directions.
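To illustrate the kind of posture-dependent trigger logic such a tool implements, the sketch below derives head direction from two tracked keypoints and raises a TTL-style trigger whenever the angle falls inside a user-defined window. This is a minimal illustration, not DeepLabStream's actual API; the keypoint names, angle window and `send_ttl` stub are hypothetical.

```python
import math

def head_direction(nose, neck):
    """Angle (degrees) of the neck->nose vector; keypoints are (x, y) tuples."""
    dx, dy = nose[0] - neck[0], nose[1] - neck[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def send_ttl():
    # Stand-in for a real output call (e.g. a DAQ or GPIO line driving the laser).
    print("TTL pulse -> optogenetic stimulation")

TARGET_ANGLE, TOLERANCE = 90.0, 15.0   # hypothetical stimulation window

def on_new_pose(keypoints):
    """Called once per frame with a dict of tracked keypoints."""
    angle = head_direction(keypoints["nose"], keypoints["neck"])
    if abs((angle - TARGET_ANGLE + 180) % 360 - 180) <= TOLERANCE:
        send_ttl()

# Example frame (image coordinates, y increasing downward):
on_new_pose({"nose": (120.0, 140.0), "neck": (120.0, 110.0)})
```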

Author(s):  
Jens F. Schweihoff ◽  
Matvey Loshakov ◽  
Irina Pavlova ◽  
Laura Kück ◽  
Laura A. Ewell ◽  
...  

Abstract: In general, animal behavior can be described as a neuronally driven sequence of recurring postures through time. Current technologies enable offline pose estimation with high spatio-temporal resolution; however, to understand complex behaviors, it is necessary to correlate behavior with neuronal activity in real time. Here we present DeepLabStream, a highly versatile closed-loop solution for freely moving mice that can autonomously conduct behavioral experiments ranging from behavior-based learning tasks to posture-dependent optogenetic stimulation. DeepLabStream has a temporal resolution in the millisecond range, can operate with multiple devices and can be easily tailored to a wide range of species and experimental designs. We employ DeepLabStream to autonomously run a second-order olfactory conditioning task for freely moving mice and to deliver optogenetic stimuli based on mouse head direction.


2020 ◽  
Author(s):  
Markus Marks ◽  
Jin Qiuhan ◽  
Oliver Sturman ◽  
Lukas von Ziegler ◽  
Sepp Kollmorgen ◽  
...  

Abstract: Analysing the behavior of individuals or groups of animals in complex environments is an important yet difficult computer vision task. Here we present a novel deep learning architecture for classifying animal behavior and demonstrate how this end-to-end approach can significantly outperform pose estimation-based approaches, whilst requiring no intervention after minimal training. Our behavioral classifier is embedded in a first-of-its-kind pipeline (SIPEC) that performs segmentation, identification, pose estimation and classification of behavior fully automatically. SIPEC successfully recognizes multiple behaviors of freely moving mice as well as socially interacting nonhuman primates in 3D, using data only from simple mono-vision cameras in home-cage setups.
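A minimal PyTorch sketch of the general idea behind such an end-to-end classifier (not the SIPEC architecture itself): a small convolutional encoder produces per-frame features that a recurrent layer aggregates into one behavior label per clip. The layer sizes, number of behavior classes and input resolution are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FrameToBehavior(nn.Module):
    """CNN per frame + GRU over time -> one behavior label per clip."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.cnn = nn.Sequential(                     # per-frame feature extractor
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),    # -> (batch*time, 32)
        )
        self.rnn = nn.GRU(32, 64, batch_first=True)   # temporal aggregation
        self.head = nn.Linear(64, n_classes)

    def forward(self, clips):                         # clips: (batch, time, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, h = self.rnn(feats)                        # h: (1, batch, 64)
        return self.head(h[-1])                       # logits per clip

model = FrameToBehavior()
dummy = torch.randn(2, 8, 1, 64, 64)                  # 2 clips of 8 grayscale frames
print(model(dummy).shape)                             # torch.Size([2, 4])
```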


2019 ◽  
Author(s):  
Zach Werkhoven ◽  
Christian Rohrsen ◽  
Chuan Qin ◽  
Björn Brembs ◽  
Benjamin de Bivort

Abstract: Fast object tracking in real time allows convenient tracking of very large numbers of animals and enables closed-loop experiments that control stimuli for multiple animals in parallel. We developed MARGO, a real-time animal tracking suite for custom behavioral experiments, and demonstrated that it can rapidly and accurately track large numbers of animals in parallel over very long timescales. We incorporated control of peripheral hardware and implemented a flexible software architecture for defining new experimental routines. These features enable closed-loop delivery of stimuli to many individuals simultaneously. We highlight MARGO's ability to coordinate tracking and hardware control with two custom behavioral assays (measuring phototaxis and optomotor response) and one optogenetic operant conditioning assay. Several open-source animal trackers are currently available; MARGO's strengths are (1) robustness, (2) high throughput, (3) flexible control of hardware and (4) real-time closed-loop control of sensory and optogenetic stimuli, all optimized for large-scale experimentation.
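The core tracking-plus-trigger loop of such a tool can be sketched in a few lines of NumPy/SciPy (an illustration of the concept, not MARGO's implementation): threshold each frame against a background image, take one centroid per connected blob, and flag a per-arena stimulus whenever a centroid enters that arena's ROI. The ROI layout and threshold are assumed values.

```python
import numpy as np
from scipy import ndimage

THRESH = 30                                   # assumed intensity difference for "animal present"
ROIS = {0: (slice(0, 50), slice(0, 50)),      # arena id -> (row slice, col slice)
        1: (slice(0, 50), slice(50, 100))}

def track_and_trigger(frame, background):
    """Return centroids of all animals and the set of ROIs to stimulate."""
    fg = np.abs(frame.astype(int) - background.astype(int)) > THRESH
    labels, n = ndimage.label(fg)                       # one label per blob
    centroids = ndimage.center_of_mass(fg, labels, range(1, n + 1))
    triggers = set()
    for (r, c) in centroids:
        for roi_id, (rows, cols) in ROIS.items():
            if rows.start <= r < rows.stop and cols.start <= c < cols.stop:
                triggers.add(roi_id)                    # e.g. switch on this arena's LED
    return centroids, triggers

# Toy frame: uniform background with two bright "animals".
bg = np.zeros((50, 100), dtype=np.uint8)
frm = bg.copy(); frm[10:14, 10:14] = 255; frm[20:24, 80:84] = 255
print(track_and_trigger(frm, bg))
```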


2020 ◽  
Author(s):  
Gary Kane ◽  
Gonçalo Lopes ◽  
Jonny L. Saunders ◽  
Alexander Mathis ◽  
Mackenzie W. Mathis

Abstract: The ability to control a behavioral task or stimulate neural activity based on animal behavior in real time is an important tool for experimental neuroscientists. Ideally, such tools are noninvasive, low-latency, and provide interfaces to trigger external hardware based on posture. Recent advances in pose estimation with deep learning allow researchers to train deep neural networks to accurately quantify a wide variety of animal behaviors. Here we provide a new DeepLabCut-Live! package that achieves low-latency real-time pose estimation (within 15 ms, >100 FPS), with an additional forward-prediction module that achieves zero-latency feedback and a dynamic-cropping mode that allows for higher inference speeds. We also provide three options for using this tool with ease: (1) a stand-alone GUI (called DLC-Live! GUI), and integration into (2) Bonsai and (3) AutoPilot. Lastly, we benchmarked performance on a wide range of systems so that experimentalists can easily decide what hardware is required for their needs.
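The forward-prediction idea, compensating the pipeline's processing latency by extrapolating each keypoint ahead in time, can be illustrated with a simple constant-velocity extrapolation. This is a hedged sketch of the concept, not the module shipped with DeepLabCut-Live!.

```python
import numpy as np

def forward_predict(poses, timestamps, latency_s):
    """Extrapolate the latest pose forward by the known processing latency.

    poses      : (n_frames, n_keypoints, 2) array of x, y positions
    timestamps : (n_frames,) acquisition times in seconds
    latency_s  : estimated camera-to-feedback delay to compensate
    """
    velocity = (poses[-1] - poses[-2]) / (timestamps[-1] - timestamps[-2])
    return poses[-1] + velocity * latency_s   # constant-velocity guess

poses = np.array([[[100.0, 50.0]], [[103.0, 50.5]]])   # two frames, one keypoint
t = np.array([0.000, 0.010])                            # 100 FPS acquisition
print(forward_predict(poses, t, latency_s=0.015))       # ~[[107.5, 51.25]]
```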


2022 ◽  
Author(s):  
Daesoo Kim ◽  
Dae-Gun Kim ◽  
Anna Shin ◽  
Yong-Cheol Jeong ◽  
Seahyung Park

Artificial intelligence (AI) is an emerging tool for high-resolution behavioural analysis and for running behavioural experiments without human intervention. Here, we applied an AI-based system, AVATAR, which automatically virtualises 3D motions from the detection of nine body parts. This allows quantification, classification and detection of specific action sequences in real time and facilitates closed-loop manipulation, triggered by the onset of specific behaviours, in freely moving mice.


Author(s):  
Kevin Luxem ◽  
Falko Fuhrmann ◽  
Johannes Kürsch ◽  
Stefan Remy ◽  
Pavol Bauer

Abstract: Naturalistic behavior is highly complex and dynamic. Approaches aiming to understand how neuronal ensembles generate behavior require robust behavioral quantification in order to correlate neural activity patterns with behavioral motifs. Here, we present Variational Animal Motion Embedding (VAME), a probabilistic machine learning framework for discovering the latent structure of animal behavior from input time series obtained with markerless pose estimation tools. To demonstrate our framework, we perform unsupervised behavioral phenotyping of APP/PS1 mice, an animal model of Alzheimer's disease. Using markerless pose estimates from open-field exploration as input, VAME uncovers a distribution of detailed and clearly segmented behavioral motifs. Moreover, we show that the recovered distribution of phenotype-specific motifs can be used to reliably distinguish APP/PS1 from wildtype mice, whereas human experts fail to classify the phenotype from the same video observations. We propose VAME as a versatile and robust tool for unsupervised quantification of behavior across organisms and experimental settings.
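To make the idea concrete, here is a minimal recurrent variational autoencoder over pose time series in the spirit of the framework described. It is an illustrative PyTorch sketch, not the published VAME code; the dimensions and the choice of GRU encoder/decoder are assumptions. Sequences of keypoint coordinates are compressed into a latent vector whose structure can then be clustered into motifs.

```python
import torch
import torch.nn as nn

class PoseSeqVAE(nn.Module):
    """GRU encoder -> latent z -> GRU decoder reconstructing the pose sequence."""
    def __init__(self, n_features=16, hidden=64, latent=8):
        super().__init__()
        self.enc = nn.GRU(n_features, hidden, batch_first=True)
        self.to_mu, self.to_logvar = nn.Linear(hidden, latent), nn.Linear(hidden, latent)
        self.from_z = nn.Linear(latent, hidden)
        self.dec = nn.GRU(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                       # x: (batch, time, n_features)
        _, h = self.enc(x)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        h0 = torch.tanh(self.from_z(z)).unsqueeze(0)
        dec_out, _ = self.dec(x, h0)            # teacher-forced reconstruction
        return self.out(dec_out), mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = ((recon - x) ** 2).mean()                                 # reconstruction term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())   # KL regularizer
    return rec + kl

model = PoseSeqVAE()
seqs = torch.randn(32, 30, 16)                  # 32 snippets, 30 frames, 8 keypoints (x, y)
recon, mu, logvar = model(seqs)
print(vae_loss(recon, seqs, mu, logvar).item())
```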


2020 ◽  
Author(s):  
Johannes Friedrich ◽  
Andrea Giovannucci ◽  
Eftychios A. Pnevmatikakis

Abstract: In-vivo calcium imaging through microendoscopic lenses enables imaging of neuronal populations deep within the brains of freely moving animals. Previously, a constrained matrix factorization approach (CNMF-E) was proposed to extract single-neuron activity from microendoscopic data. However, this approach relies on offline batch processing of the entire video and is demanding in terms of both computing and memory requirements. These drawbacks prevent its application to the analysis of large datasets and to closed-loop experimental settings. Here we address both issues by introducing two online algorithms for extracting neuronal activity from streaming microendoscopic data. Our first algorithm is an online adaptation of the CNMF-E algorithm, which dramatically reduces its memory and computation requirements. Our second algorithm uses a convolution-based background model for microendoscopic data that enables even faster (real-time) processing on GPU hardware. Our approach is modular and can be combined with existing online motion-artifact correction and activity deconvolution methods to provide a highly scalable pipeline for microendoscopic data analysis. We apply our algorithms to two previously published, typical experimental datasets and show that they yield results of similarly high quality as the popular offline approach, while outperforming it in computing time and memory requirements.

Author summary: Calcium imaging methods enable researchers to measure the activity of genetically targeted, large-scale neuronal subpopulations. Whereas previous methods required the specimen to be stable, e.g. anesthetized or head-fixed, new brain imaging techniques using microendoscopic lenses and miniaturized microscopes have enabled deep-brain imaging in freely moving mice. However, the very large background fluctuations, the inevitable movements and distortions of the imaging field, and the extensive spatial overlap of fluorescent signals complicate the goal of efficiently extracting accurate estimates of neural activity from the observed video data. Furthermore, current activity-extraction methods are computationally expensive due to the complex background model and are typically applied to imaging data only after the experiment is complete. Moreover, in some scenarios it is necessary to perform experiments in real time and in closed loop, analyzing data on the fly to guide the next experimental steps or to control feedback, which calls for new methods for accurate real-time processing. Here we address both issues by adapting a popular extraction method to operate online and extending it to utilize GPU hardware, enabling real-time processing. Our algorithms yield results of similarly high quality as the original offline approach while outperforming it in computing time and memory requirements. Our results enable faster and more scalable analysis, and open the door to new closed-loop experiments in deep brain areas and in freely moving preparations.
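The convolution-based background idea, removing the slowly varying fluorescence that dominates microendoscopic frames before extracting cellular signals, can be illustrated with a single large-kernel filtering step applied to each streaming frame. This is a conceptual sketch only; the published algorithm uses a more careful ring-shaped background model and runs on GPU, and the kernel size here is an assumed value tied to the expected neuron diameter.

```python
import numpy as np
from scipy import ndimage

NEURON_DIAMETER_PX = 13          # assumed; the filter should be larger than a cell

def remove_background(frame):
    """Estimate a smooth background by large-kernel averaging and subtract it."""
    background = ndimage.uniform_filter(frame.astype(float),
                                        size=3 * NEURON_DIAMETER_PX)
    return frame - background    # residual keeps cell-sized structure

def process_stream(frames):
    """Apply background removal frame by frame, as data arrive."""
    for frame in frames:
        yield remove_background(frame)

# Toy stream: a bright "cell" riding on a broad background gradient.
base = np.linspace(0, 50, 128)[None, :] * np.ones((128, 1))
frame = base.copy(); frame[60:66, 60:66] += 40
cleaned = next(process_stream([frame]))
print(cleaned[62, 62] > cleaned[10, 10])   # the cell stands out after subtraction
```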


2018 ◽  
Author(s):  
Judith Lim ◽  
Tansu Celikel

Abstract
Objective: Closed-loop control of brain and behavior will benefit from real-time detection of behavioral events to enable low-latency communication with peripheral devices. In animal experiments, this is typically achieved with sparsely distributed (embedded) sensors that detect the animal's presence in select regions of interest. High-speed cameras provide high-density sampling across large arenas, capturing the richness of animal behavior; however, the image-processing bottleneck prohibits real-time feedback in the context of rapidly evolving behaviors.
Approach: Here we developed open-source software, named PolyTouch, to track animal behavior in large arenas and provide rapid closed-loop feedback within ~5.7 ms, i.e. the average latency from detection of an event to analog stimulus delivery (e.g. an auditory tone or TTL pulse) when tracking a single body. This stand-alone software is written in Java. The included MATLAB wrapper provides experimental flexibility for data acquisition, analysis and visualization.
Main results: As a proof-of-principle application we deployed PolyTouch for place-awareness training. A user-defined portion of the arena served as a virtual target; a visit to (or approach toward) the target triggered auditory feedback. We show that mice develop awareness of virtual spaces and tend to stay for shorter periods and move faster while inside the virtual target zone if their visits are coupled to a relatively high stimulus intensity (≥49 dB). Thus, closed-loop presentation of perceived aversive feedback is sufficient to condition mice to avoid virtual targets within the span of a single session (~20 min).
Significance: Neuromodulation techniques now allow control of neural activity in a cell-type-specific manner at spiking resolution. Using animal behavior to drive closed-loop control of neural activity would help address the neural basis of behavioral-state- and environmental-context-dependent information processing in the brain.
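The feedback loop described, a position check against a virtual target followed by immediate auditory feedback, reduces to a few lines per tracked frame. The sketch below is illustrative only and not part of PolyTouch (which is written in Java); the zone coordinates, the `play_tone` stub and the intensity value are hypothetical.

```python
import time

TARGET_ZONE = (200, 300, 200, 300)      # x_min, x_max, y_min, y_max in pixels (assumed)
TONE_DB = 49                            # intensity at/above which avoidance emerged

def play_tone(db):
    # Stand-in for the real audio/TTL output call.
    print(f"tone at {db} dB")

def on_position(x, y, detected_at):
    """Trigger feedback when the animal is inside the virtual target zone."""
    x_min, x_max, y_min, y_max = TARGET_ZONE
    if x_min <= x <= x_max and y_min <= y <= y_max:
        play_tone(TONE_DB)
        latency_ms = (time.perf_counter() - detected_at) * 1000.0
        print(f"detection-to-feedback latency: {latency_ms:.2f} ms")

on_position(250, 250, detected_at=time.perf_counter())
```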


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Gary A Kane ◽  
Gonçalo Lopes ◽  
Jonny L Saunders ◽  
Alexander Mathis ◽  
Mackenzie W Mathis

The ability to control a behavioral task or stimulate neural activity based on animal behavior in real time is an important tool for experimental neuroscientists. Ideally, such tools are noninvasive, low-latency, and provide interfaces to trigger external hardware based on posture. Recent advances in pose estimation with deep learning allow researchers to train deep neural networks to accurately quantify a wide variety of animal behaviors. Here, we provide a new DeepLabCut-Live! package that achieves low-latency real-time pose estimation (within 15 ms, >100 FPS), with an additional forward-prediction module that achieves zero-latency feedback and a dynamic-cropping mode that allows for higher inference speeds. We also provide three options for using this tool with ease: (1) a stand-alone GUI (called DLC-Live! GUI), and integration into (2) Bonsai and (3) AutoPilot. Lastly, we benchmarked performance on a wide range of systems so that experimentalists can easily decide what hardware is required for their needs.

