SHYBRID: A graphical tool for generating hybrid ground-truth spiking data for evaluating spike sorting performance

2019 ◽  
Author(s):  
Jasper Wouters ◽  
Fabian Kloosterman ◽  
Alexander Bertrand

Abstract
Spike sorting is the process of retrieving the spike times of individual neurons that are present in an extracellular neural recording. Over the last decades, many spike sorting algorithms have been published. To help guide a user towards a specific spike sorting algorithm for a given recording setting (i.e., brain region and recording device), we provide an open-source graphical tool, written in Python, for the generation of hybrid ground-truth data. Hybrid ground-truth generation is a data-driven modelling paradigm in which the spikes of a single unit are moved to a different location on the recording probe, thereby creating a virtual unit whose spike times are known. The tool enables a user to efficiently generate hybrid ground-truth datasets and use them to make an informed choice between spike sorting algorithms, fine-tune algorithm parameters for the recording setting at hand, or gain a deeper understanding of how those algorithms behave.
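The hybrid paradigm described above can be illustrated in a few lines of NumPy. The sketch below is a hypothetical simplification, not SHYBRID's actual API: it assumes the unit's per-channel template is already known, removes it at its original probe location, and re-inserts it shifted along the channel axis, so the original spike times become ground truth for the relocated virtual unit.

```python
import numpy as np

def relocate_unit(recording, spike_times, template, channel_shift):
    """Create a hybrid ground-truth unit by moving one unit's template.

    recording     : (n_channels, n_samples) extracellular data
    spike_times   : sample indices of the unit's spikes
    template      : (n_channels, n_waveform_samples) average waveform
    channel_shift : number of channels to displace the unit along the probe

    Returns the hybrid recording; `spike_times` are now ground truth
    for the relocated (virtual) unit.
    """
    hybrid = recording.copy()
    n_ch, w = template.shape
    shifted = np.roll(template, channel_shift, axis=0)  # move along probe
    for t in spike_times:
        if t + w <= hybrid.shape[1]:
            hybrid[:, t:t + w] -= template   # remove unit at old location
            hybrid[:, t:t + w] += shifted    # insert at new location
    return hybrid
```

Because the inserted spikes are real recorded waveforms rather than simulated ones, the hybrid data retains the noise statistics of the original recording.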

2018 ◽  
Author(s):  
Madeny Belkhiri ◽  
Duda Kvitsiani

Abstract
Understanding how populations of neurons represent and compute internal or external variables requires precise and objective metrics for tracing the individual spikes that belong to a given neuron. Despite recent progress in the development of accurate and fast spike sorting tools, the scarcity of ground-truth data makes it difficult to settle on the best-performing spike sorting algorithm. Moreover, the use of different electrode configurations and signal acquisition settings (e.g. anesthetized, head-fixed, or freely behaving animal recordings; tetrodes vs. silicon probes) makes it even harder to develop a universal spike sorting tool that performs well without human intervention. Among the prevalent problems in spike sorting are units splitting due to drift, clustering of bursting cells, and nonstationary background noise. The last is particularly problematic in freely behaving animals, where noise from the electrophysiological activity of hundreds or thousands of neurons is intermixed with noise arising from movement artifacts. We address these problems with a new spike sorting tool based on a template matching algorithm. Spike waveform templates are used to perform normalized cross-correlation (NCC) with the acquired signal for spike detection. The normalization addresses problems with drift, bursting, and nonstationary noise, and provides normative scoring to compare different units in terms of cluster quality. Our spike sorting algorithm, D.sort, runs on the graphics processing unit (GPU) to accelerate computations. D.sort is a freely available software package (https://github.com/1804MB/Kvistiani-lab_Dsort).
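The NCC detection step described above might look like the following sketch. Function names, the window normalization, and the peak-picking rule are illustrative assumptions, not D.sort's actual implementation; the key property is that each score is normalized by both the template norm and the local signal norm, so it lies in [-1, 1] regardless of drift in spike amplitude or in the background noise level.

```python
import numpy as np

def ncc_detect(signal, template, threshold=0.8):
    """Detect spikes via normalized cross-correlation (NCC).

    Slides `template` over `signal`; at each lag the correlation is
    normalized by the template norm and the local signal norm, making
    the score insensitive to amplitude drift and nonstationary noise.
    """
    w = len(template)
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    scores = np.zeros(len(signal) - w + 1)
    for i in range(len(scores)):
        s = signal[i:i + w] - signal[i:i + w].mean()
        denom = t_norm * np.linalg.norm(s)
        scores[i] = (t @ s) / denom if denom > 0 else 0.0
    # local maxima above threshold are taken as detected spike onsets
    peaks = [i for i in range(1, len(scores) - 1)
             if scores[i] > threshold
             and scores[i] >= scores[i - 1] and scores[i] >= scores[i + 1]]
    return np.array(peaks), scores
```

A perfectly scaled copy of the template scores exactly 1.0, which is what makes the score usable as a normative measure of cluster quality across units.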


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Jeremy Magland ◽  
James J Jun ◽  
Elizabeth Lovero ◽  
Alexander J Morley ◽  
Cole Lincoln Hurwitz ◽  
...  

Spike sorting is a crucial step in electrophysiological studies of neuronal activity. While many spike sorting packages are available, there is little consensus about which are most accurate under different experimental conditions. SpikeForest is an open-source and reproducible software suite that benchmarks the performance of automated spike sorting algorithms across an extensive, curated database of ground-truth electrophysiological recordings, displaying results interactively on a continuously updating website. With contributions from eleven laboratories, our database currently comprises 650 recordings (1.3 TB total size) with around 35,000 ground-truth units. These data include paired intracellular/extracellular recordings and state-of-the-art simulated recordings. Ten of the most popular spike sorting codes are wrapped in a Python package and evaluated on a compute cluster using an automated pipeline. SpikeForest documents community progress in automated spike sorting, and guides neuroscientists to an optimal choice of sorter and parameters for a wide range of probes and brain regions.
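A common way such benchmarks score a sorter against ground truth is accuracy = n_match / (n_match + n_miss + n_false_positive), where a sorted spike matches a ground-truth spike if it falls within a small time tolerance. The greedy matcher below is a minimal sketch of that idea under those assumptions, not SpikeForest's exact implementation:

```python
def spike_train_accuracy(gt_times, sorted_times, tol=10):
    """Score a sorted spike train against ground truth.

    Greedily matches each ground-truth spike to the closest unused
    sorted spike within `tol` samples, then computes
    accuracy = n_match / (n_match + n_miss + n_false_positive).
    """
    sorted_times = sorted(sorted_times)
    used = [False] * len(sorted_times)
    n_match = 0
    for t in sorted(gt_times):
        # find the closest unused sorted spike within tolerance
        best, best_d = None, tol + 1
        for i, s in enumerate(sorted_times):
            if not used[i] and abs(s - t) <= tol and abs(s - t) < best_d:
                best, best_d = i, abs(s - t)
        if best is not None:
            used[best] = True
            n_match += 1
    n_miss = len(gt_times) - n_match
    n_fp = len(sorted_times) - n_match
    return n_match / (n_match + n_miss + n_fp)
```

Folding misses and false positives into a single ratio penalizes both over-splitting and over-merging, which is why a combined accuracy is often preferred over reporting precision or recall alone.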


2020 ◽  
Vol 19 (1) ◽  
pp. 141-158 ◽  
Author(s):  
Jasper Wouters ◽  
Fabian Kloosterman ◽  
Alexander Bertrand

2020 ◽  
Vol 890 (2) ◽  
pp. 103 ◽  
Author(s):  
Shin Toriumi ◽  
Shinsuke Takasao ◽  
Mark C. M. Cheung ◽  
Chaowei Jiang ◽  
Yang Guo ◽  
...  

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ramin Toosi ◽  
Mohammad Ali Akhaee ◽  
Mohammad-Reza A. Dehaqani

Abstract
The development of high-density electrodes for recording large ensembles of neurons provides a unique opportunity for understanding the mechanisms of neuronal circuits. Nevertheless, changes in the brain tissue around chronically implanted neural electrodes often distort spike wave-shapes, raising the crucial issue of spike sorting under an unstable structure. Automatic spike sorting algorithms have been developed to extract spikes from these large extracellular datasets. However, owing to spike wave-shape instability, robust spike detection and clustering procedures that overcome the spike loss problem have been lacking. Here, we develop an automatic spike sorting algorithm based on adaptive spike detection and a mixture of skew-t distributions to address these distortions and instabilities. The adaptive detection procedure, applied to the detected spikes, consists of multi-point alignment and statistical filtering to remove falsely detected spikes. The detected spikes are clustered with a mixture of skew-t distributions to deal with non-symmetrical clusters and the spike loss problem. The proposed algorithm improves spike sorting performance in terms of both precision and recall over a broad range of signal-to-noise ratios. Furthermore, it has been validated on different datasets and demonstrates a general solution to precise spike sorting, both in vitro and in vivo.
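The alignment part of the adaptive detection step can be sketched as follows. This is a generic illustration of peak alignment (the paper's multi-point alignment and statistical filtering are more involved): threshold crossings jitter relative to the true spike peak, so each rough detection is re-centered on the local extremum before waveforms are cut out, which keeps clusters from being smeared by detection jitter.

```python
import numpy as np

def align_spikes(signal, rough_times, pre=10, post=20, search=5):
    """Re-align detected spikes on their local extremum.

    For each rough detection, search +/- `search` samples for the
    absolute peak, then cut a waveform of `pre` samples before and
    `post` samples after that peak.
    """
    waveforms, aligned = [], []
    for t in rough_times:
        lo, hi = max(t - search, 0), min(t + search + 1, len(signal))
        peak = lo + int(np.argmax(np.abs(signal[lo:hi])))
        if peak - pre >= 0 and peak + post <= len(signal):
            waveforms.append(signal[peak - pre:peak + post])
            aligned.append(peak)
    return np.array(aligned), np.array(waveforms)
```

Waveforms whose peaks fall too close to the edges of the recording are simply dropped; a statistical filter on the aligned waveforms (e.g. rejecting outliers in amplitude or shape) would then remove falsely detected events.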


2020 ◽  
Author(s):  
Gabriele Marini ◽  
Benjamin Tag ◽  
Jorge Goncalves ◽  
Eduardo Velloso ◽  
Raja Jurdak ◽  
...  

BACKGROUND The use of location-based data in clinical settings is often limited to real-time monitoring. In this study, we aim to develop a proximity-based localization system and show how its longitudinal deployment can provide operational insights related to staff and patient mobility and room occupancy in clinical settings. Such a streamlined data-driven approach can help increase the uptime of operating rooms and, more broadly, provide an improved understanding of facility utilization. OBJECTIVE The aim of this study is to measure the accuracy of the system and algorithmically calculate measures of mobility and occupancy. METHODS We developed a Bluetooth low energy, proximity-based localization system and deployed it in a hospital for 30 days. The system recorded the position of 75 people (17 patients and 55 staff) during this period. In addition, we collected ground-truth data and used them to validate system performance and accuracy. A number of analyses were conducted to estimate how people move in the hospital and where they spend their time. RESULTS Using ground-truth data, we estimated the accuracy of our system to be 96%. Using mobility trace analysis, we generated occupancy rates for different rooms in the hospital occupied by both staff and patients. We were also able to measure how much time, on average, patients spend in different rooms of the hospital. Finally, using unsupervised hierarchical clustering, we showed that the system could differentiate between staff and patients without training. CONCLUSIONS Analysis of longitudinal, location-based data can offer rich operational insights into hospital efficiency. In particular, they allow quick and consistent assessment of new strategies and protocols and provide a quantitative way to measure their effectiveness.
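A proximity-based localization of this kind typically assigns each tag to the room whose beacon yields the strongest received signal, and derives occupancy from the resulting trace. The sketch below is a hypothetical illustration of those two steps (room names, RSSI values, and the nearest-beacon rule are assumptions, not the study's actual pipeline):

```python
from collections import Counter

def locate(rssi_by_room):
    """Assign a tag to the room whose beacon has the strongest RSSI.

    rssi_by_room: {"room": rssi_dBm, ...}. RSSI is negative and grows
    toward 0 as the tag approaches a beacon, so the maximum identifies
    the nearest beacon.
    """
    return max(rssi_by_room, key=rssi_by_room.get)

def occupancy_rates(trace):
    """Fraction of observed time a tag spent in each room.

    trace: sequence of room labels, one per fixed sampling interval.
    """
    counts = Counter(trace)
    total = sum(counts.values())
    return {room: n / total for room, n in counts.items()}
```

Aggregating such per-tag rates across the 30-day deployment is what yields room-level occupancy statistics and average patient dwell times.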


2016 ◽  
Vol 264 ◽  
pp. 65-77 ◽  
Author(s):  
Alex H. Barnett ◽  
Jeremy F. Magland ◽  
Leslie F. Greengard

2016 ◽  
Author(s):  
Pierre Yger ◽  
Giulia L.B. Spampinato ◽  
Elric Esposito ◽  
Baptiste Lefebvre ◽  
Stéphane Deny ◽  
...  

Abstract
Understanding how assemblies of neurons encode information requires recording large populations of cells in the brain. In recent years, multi-electrode arrays and large silicon probes have been developed to record simultaneously from hundreds or thousands of densely packed electrodes. However, these new devices challenge the classical approaches to spike sorting. Here we developed a new method to address these issues, based on a highly automated algorithm for extracting spikes from extracellular data, and show that this algorithm reaches near-optimal performance both in vitro and in vivo. The algorithm is composed of two main steps: 1) a “template-finding” phase to extract the cell templates, i.e. the pattern of activity evoked over many electrodes when one neuron fires an action potential; 2) a “template-matching” phase, where the templates are matched to the raw data to find the locations of the spikes. Manual intervention by the user is reduced to a minimum, and the time spent on manual curation does not scale with the number of electrodes. We tested our algorithm on large-scale data from in vitro and in vivo recordings, from 32 to 4225 electrodes. We performed simultaneous extracellular and patch recordings to obtain “ground truth” data, i.e. cases where the solution to the sorting problem is at least partially known. The performance of our algorithm was always close to the best expected performance. We thus provide a general solution to sorting spikes from large-scale extracellular recordings.
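The two phases described above can be sketched as follows. This is a deliberately naive illustration, not the paper's actual algorithm: template-finding is reduced to averaging clustered snippets, and template-matching to a greedy scan that accepts a spike wherever subtracting the template lowers the residual energy by more than a threshold.

```python
import numpy as np

def find_templates(waveforms, labels):
    """'Template-finding' phase: one template per cluster.

    waveforms: (n_spikes, n_channels, n_samples) snippets, already
    clustered; labels: cluster id per snippet. The template is the mean
    activity pattern a unit evokes across all electrodes.
    """
    return {k: waveforms[labels == k].mean(axis=0) for k in np.unique(labels)}

def match_templates(recording, templates, threshold):
    """'Template-matching' phase: scan the raw data for each template.

    At each offset, if subtracting the template reduces the residual
    energy by more than `threshold`, record a spike and subtract it.
    """
    residual = recording.copy()
    found = {k: [] for k in templates}
    n_samples = recording.shape[1]
    for k, tmpl in templates.items():
        w = tmpl.shape[1]
        for t in range(n_samples - w + 1):
            seg = residual[:, t:t + w]
            gain = np.sum(seg ** 2) - np.sum((seg - tmpl) ** 2)
            if gain > threshold:
                residual[:, t:t + w] -= tmpl
                found[k].append(t)
    return found, residual
```

Because matched spikes are subtracted from the residual, overlapping spikes from different units can each be recovered in turn, which is the main advantage of template matching over clustering alone on dense probes.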


Author(s):  
Nathan J Hall ◽  
David J Herzfeld ◽  
Stephen G Lisberger

We evaluate existing spike sorters and present a new one that resolves many sorting challenges. The new sorter, called "full binary pursuit" or FBP, comprises multiple steps. First, it thresholds and clusters to identify the waveforms of all unique neurons in the recording. Second, it uses greedy binary pursuit to optimally assign all the spike events in the original voltages to separable neurons. Third, it resolves spike events that are described more accurately as the superposition of spikes from two other neurons. Fourth, it resolves situations where the recorded neurons drift in amplitude or across electrode contacts during a long recording session. Comparison with other sorters on ground-truth datasets reveals many of the failure modes of spike sorting. We examine overall spike sorter performance in ground-truth datasets and suggest post-sorting analyses that can improve the veracity of neural analyses by minimizing the intrusion of failure modes into analysis and interpretation of neural data. Our analysis reveals the tradeoff between the number of channels a sorter can process, speed of sorting, and some of the failure modes of spike sorting. FBP works best on data from 32 channels or fewer. It trades speed and number of channels for avoidance of specific failure modes that would be challenges for some use cases. We conclude that all spike sorting algorithms studied have advantages and shortcomings, and the appropriate use of a spike sorter requires a detailed assessment of the data being sorted and the experimental goals for analyses.
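The third step, resolving superpositions, amounts to asking whether an observed event is better explained by one template or by the sum of two. The exhaustive comparison below is a toy sketch of that hypothesis test (FBP's actual binary pursuit is a statistically grounded, much more efficient procedure):

```python
import numpy as np

def best_explanation(snippet, templates):
    """Pick the template, or sum of two templates, with lowest residual.

    A recorded event may be one neuron's spike or the superposition of
    two; compare the squared residual of every single template and every
    pairwise sum, and return the best-fitting hypothesis.
    """
    best, best_err = None, np.inf
    keys = list(templates)
    candidates = [(k,) for k in keys]
    candidates += [(a, b) for i, a in enumerate(keys) for b in keys[i + 1:]]
    for cand in candidates:
        model = sum(templates[k] for k in cand)
        err = np.sum((snippet - model) ** 2)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err
```

Without this step, superimposed spikes are either assigned to a single unit or discarded, which systematically biases correlation analyses between simultaneously recorded neurons.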


10.2196/19874 ◽  
2020 ◽  
Vol 8 (10) ◽  
pp. e19874
Author(s):  
Gabriele Marini ◽  
Benjamin Tag ◽  
Jorge Goncalves ◽  
Eduardo Velloso ◽  
Raja Jurdak ◽  
...  


