BonZeb: open-source, modular software tools for high-resolution zebrafish tracking and analysis

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Nicholas C. Guilbeault ◽  
Jordan Guerguiev ◽  
Michael Martin ◽  
Isabelle Tate ◽  
Tod R. Thiele

Abstract We present BonZeb—a suite of modular Bonsai packages that allows high-resolution zebrafish tracking with dynamic visual feedback. Bonsai is an increasingly popular software platform that is accelerating the standardization of experimental protocols within the neurosciences due to its speed, flexibility, and minimal programming overhead. BonZeb can be incorporated into novel and existing Bonsai workflows for online behavioral tracking, as well as offline tracking with batch processing. We demonstrate that BonZeb can run a variety of experimental configurations used for gaining insight into the neural mechanisms of zebrafish behavior. BonZeb supports head-fixed closed-loop and free-swimming virtual open-loop assays, as well as multi-animal tracking, optogenetic stimulation, and calcium imaging during behavior. The combined performance, ease of use, and versatility of BonZeb open new experimental avenues for researchers seeking high-resolution behavioral tracking of larval zebrafish.


2018 ◽  
Author(s):  
Rishi Rajalingham ◽  
Elias B. Issa ◽  
Pouya Bashivan ◽  
Kohitij Kar ◽  
Kailyn Schmidt ◽  
...  

ABSTRACT Primates—including humans—can typically recognize objects in visual images at a glance, even in the face of naturally occurring identity-preserving image transformations (e.g. changes in viewpoint). A primary neuroscience goal is to uncover neuron-level mechanistic models that quantitatively explain this behavior by predicting primate performance for each and every image. Here, we applied this stringent behavioral prediction test to the leading mechanistic models of primate vision (specifically, deep, convolutional, artificial neural networks; ANNs) by directly comparing their behavioral signatures against those of humans and rhesus macaque monkeys. Using high-throughput data collection systems for human and monkey psychophysics, we collected over one million behavioral trials for 2400 images over 276 binary object discrimination tasks. Consistent with previous work, we observed that state-of-the-art deep, feed-forward convolutional ANNs trained for visual categorization (termed DCNNIC models) accurately predicted primate patterns of object-level confusion. However, when we examined behavioral performance for individual images within each object discrimination task, we found that all tested DCNNIC models were significantly non-predictive of primate performance, and that this prediction failure was not accounted for by simple image attributes, nor rescued by simple model modifications. These results show that current DCNNIC models cannot account for the image-level behavioral patterns of primates, and that new ANN models are needed to more precisely capture the neural mechanisms underlying primate object vision. To this end, large-scale, high-resolution primate behavioral benchmarks—such as those obtained here—could serve as direct guides for discovering such models.

SIGNIFICANCE STATEMENT Recently, specific feed-forward deep convolutional artificial neural network (ANN) models have dramatically advanced our quantitative understanding of the neural mechanisms underlying primate core object recognition. In this work, we tested the limits of those ANNs by systematically comparing the behavioral responses of these models with the behavioral responses of humans and monkeys, at the resolution of individual images. Using these high-resolution metrics, we found that all tested ANN models significantly diverged from primate behavior. Going forward, these high-resolution, large-scale primate behavioral benchmarks could serve as direct guides for discovering better ANN models of the primate visual system.
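The image-level comparison described in this abstract can be illustrated with a small sketch. This is not the authors' analysis pipeline (their metric is a noise-corrected behavioral consistency score); it only shows, on synthetic data, the basic idea of correlating per-image performance patterns between a model and primates. All variable names and the simple Pearson-correlation metric are assumptions for illustration.

```python
# Illustrative sketch (synthetic data, not the paper's actual metric):
# compare per-image performance patterns of models against primates.
import numpy as np

rng = np.random.default_rng(0)
n_images = 240

# Per-image accuracy (fraction correct) for primates and two toy models.
primate_acc = rng.uniform(0.5, 1.0, n_images)
# Model A tracks primate performance image by image (plus noise).
model_a_acc = np.clip(primate_acc + rng.normal(0, 0.05, n_images), 0, 1)
# Model B matches only the overall difficulty range, not per-image pattern.
model_b_acc = rng.uniform(0.5, 1.0, n_images)

def image_level_consistency(model, primate):
    """Pearson correlation of per-image performance patterns."""
    return np.corrcoef(model, primate)[0, 1]

print(image_level_consistency(model_a_acc, primate_acc))  # high
print(image_level_consistency(model_b_acc, primate_acc))  # near zero
```

A model can score well on object-level confusion patterns (averaged over images) while still failing this per-image test, which is the distinction the abstract draws.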


2020 ◽  
Vol 46 (1) ◽  
pp. 14-19
Author(s):  
Caroline Geraldi Pierozzi ◽  
Ricardo Toshio Fujihara ◽  
Efrain de Santana Souza ◽  
Marília Pizetta ◽  
Maria Márcia Pereira Sartori ◽  
...  

ABSTRACT Interactive keys are tools that aid research and technical work, as the identification of organisms has become increasingly common in scientific and academic contexts. An interactive key was developed with the software Lucid v. 3.3 for the identification of eleven fungal species associated with onion, carrot, pepper and tomato seeds. It was based on a matrix composed of six features: crop, conidium, conidiophore, color of long conidiophore, color of mycelium and presence of setae, plus 21 character states. In addition, descriptions, illustrations and high-resolution photographs of the morphological characters and states were made available to aid correct identification of the fungal species. Validation of the interactive key was performed by distinct groups of volunteers: (i) graduate students with prior knowledge, using the interactive key; (ii) undergraduate students with little prior knowledge, using the interactive key; and (iii) undergraduate students with little prior knowledge, using a conventional identification system, namely the printed manuals used in seed pathology laboratories. We analyzed the time spent by each volunteer to evaluate 25 seeds infected with the fungal species in the key, as well as each participant's success rate and perceived difficulty level. The high percentage of correct answers with the interactive key, and the ease with which volunteers used it, confirmed its efficiency: identification accuracy increased compared to the conventional system. Furthermore, the success rate and difficulty level showed low variability within groups (i) and (ii). These results reflect the interaction of the user with features of the developed tool, such as the high-resolution photographs, which faithfully reproduce the fungal characteristics observed in seeds under a stereomicroscope. Thus, the interactive key presented here can aid teaching, institutional and commercial research, and the inspection and certification of seeds, making diagnosis safer and more accurate. The key is freely available at https://keys.lucidcentral.org/keys/v3/seed_fungi/.


2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi230-vi230
Author(s):  
Sadaf Soloukey ◽  
Luuk Verhoef ◽  
Frits Mastik ◽  
Bastian Generowicz ◽  
Eelke Bos ◽  
...  

Abstract BACKGROUND Neurosurgical practice still relies heavily on pre-operatively acquired images to guide tumor resections, a practice which comes with inherent pitfalls such as registration inaccuracy due to brain shift, and lack of real-time functional or morphological feedback. Here we describe functional Ultrasound (fUS) as a new high-resolution, depth-resolved, MRI/CT-registered imaging technique able to detect functional regions and vascular morphology during awake and anesthetized tumor resections. MATERIALS AND METHODS fUS relies on high-frame-rate (HFR) ultrasound, making the technique sensitive to very small motions caused by vascular dynamics (µDoppler) and allowing measurements of changes in cerebral blood volume (CBV) with micrometer-millisecond precision. This opens up the possibility to 1) detect functional response, as CBV-changes reflect changes in metabolism of activated neurons through neurovascular coupling, and 2) visualize in-vivo vascular morphology of pathological and healthy tissue with high resolution at unprecedented depths. During a range of anesthetized and awake neurosurgical procedures we acquired vascular and functional images of brain and spinal cord using conventional ultrasound probes connected to a research acquisition system. Building on Brainlab’s Intra-Operative Navigation modules, we co-registered our intra-operative Power Doppler Images (PDIs) to patient-registered MRI/CT-data in real-time. RESULTS During meningioma and glioma resections, our co-registered PDIs revealed fUS’ ability to visualize the tumor’s feeding vessels and vascular borders in real-time, with a level of detail unprecedented by conventional MRI-sequences. During awake resections, fUS was able to detect distinct, ESM-confirmed functional areas as activated during conventional motor and language tasks. In all cases, images were acquired with micrometer-millisecond (300 µm, 1.5–2.0 ms) precision at imaging depths exceeding 5 cm.
CONCLUSION fUS is a new real-time, high-resolution and depth-resolved imaging technique, combining favorable imaging specifications with characteristics such as mobility and ease of use which are uniquely beneficial for a potential image-guided neurosurgical tool.


2020 ◽  
Vol 10 (23) ◽  
pp. 13044-13056
Author(s):  
Ruben Evens ◽  
Greg Conway ◽  
Kirsty Franklin ◽  
Ian Henderson ◽  
Jennifer Stockdale ◽  
...  

2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Astha Jaiswal ◽  
Christian H. Hoerth ◽  
Ana M. Zúñiga Pereira ◽  
Holger Lorenz

Abstract Induced morphology changes of cells and organelles are by far the easiest way to determine precise protein sub-locations and organelle quantities in light microscopy. By using hypotonic solutions to swell mammalian cell organelles we demonstrate that precise membrane, lumen or matrix protein locations within the endoplasmic reticulum, Golgi and mitochondria can reliably be established. We also show the benefit of this approach for organelle quantifications, especially for clumped or intertwined organelles like peroxisomes and mitochondria. Since cell and organelle swelling is reversible, it can be applied to live cells for successive high-resolution analyses. Our approach outperforms many existing imaging modalities with respect to resolution, ease-of-use and cost-effectiveness without excluding any co-utilization with existing optical (super)resolution techniques.
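The organelle-quantification benefit described above comes down to counting discrete bright regions: clumped or intertwined organelles merge into one region, while swelling separates them so each is counted individually. The following stdlib-only sketch (not the authors' code; the masks and names are hypothetical) illustrates that counting step on tiny binary masks.

```python
# Hedged illustration: organelle counting as connected-component counting
# in a binary image mask. Swelling that separates clumped organelles
# shows up as an increased component count.
from collections import deque

def count_components(mask):
    """Count 4-connected regions of truthy cells in a 2D grid."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                      # new region found
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:                    # flood-fill the region
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

clumped = [[1, 1, 1, 0],   # two touching organelles: counted as one
           [0, 0, 0, 0]]
swollen = [[1, 0, 1, 0],   # after swelling they separate: counted as two
           [0, 0, 0, 0]]
print(count_components(clumped), count_components(swollen))  # 1 2
```

Real analyses would run the same idea on thresholded microscopy images; the point here is only why separating touching objects changes the count.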


2013 ◽  
Vol 78 (2) ◽  
pp. 409-419 ◽  
Author(s):  
Alireza Abolhasani ◽  
Mohammad Tohidi ◽  
Khayrollah Hadidi ◽  
Abdollah Khoei

Author(s):  
Niels Buchhold ◽  
Christian Baumgartner

This paper presents a new optical, multi-functional, high-resolution 3-axis sensor which serves as a navigation input and can, for example, replace standard joysticks in medical devices such as electric wheelchairs, surgical robots or medical diagnosis devices. A light source, e.g. a laser diode, is affixed to a movable axis and projects a random geometric shape on an image sensor (CMOS or CCD). The software in the downstream microcontroller identifies the geometric shape’s center, distortion and size, then calculates X, Y, and Z coordinates. These coordinates can then be processed in attached devices. The 3-axis sensor is characterized by its very high resolution, precise reproducibility and the plausibility of the coordinates produced. In addition, optical processing of the signal provides a high level of safety against electromagnetic and radio frequency interference. The sensor presented here is adaptive and can be adjusted to fit a user’s range of motion (stroke and force). This approach aims to optimize sensor systems such as joysticks in medical devices in terms of safety, ease of use, and adaptability.
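The center/size computation described above can be sketched in a few lines. This is not the authors' microcontroller firmware; the function name, threshold, reference area, and the assumption that spot size varies linearly with axial displacement are all illustrative choices.

```python
# Illustrative sketch: derive X/Y from the centroid of the projected
# light spot and Z from its apparent size, as the downstream
# microcontroller processing described above might do.
import numpy as np

def spot_to_axes(frame, threshold=128, ref_area=400.0):
    """Return (x, y, z) from a grayscale frame containing one bright spot.

    x, y: centroid offset from the image center (pixels).
    z:    relative axial displacement inferred from spot area, assuming
          (for illustration only) area grows linearly with distance.
    """
    mask = frame >= threshold
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        raise ValueError("no spot detected above threshold")
    h, w = frame.shape
    x = xs.mean() - (w - 1) / 2.0
    y = ys.mean() - (h - 1) / 2.0
    z = mask.sum() / ref_area - 1.0   # 0.0 when spot has reference size
    return x, y, z

# Synthetic frame: a bright 20x20 spot centered at row 50, column 70.
frame = np.zeros((100, 100), dtype=np.uint8)
frame[40:60, 60:80] = 255
print(spot_to_axes(frame))  # (20.0, 0.0, 0.0)
```

A real implementation would also evaluate the shape's distortion, as the abstract notes, to reject implausible readings; that plausibility check is omitted here.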

