D.sort: template-based automatic spike sorting tool

2018 ◽  
Author(s):  
Madeny Belkhiri ◽  
Duda Kvitsiani

Abstract
Understanding how populations of neurons represent and compute internal or external variables requires precise and objective metrics for tracing the individual spikes that belong to a given neuron. Despite recent progress in the development of accurate and fast spike sorting tools, the scarcity of ground truth data makes it difficult to settle on the best performing spike sorting algorithm. Moreover, the use of different electrode configurations and signal acquisition settings (e.g. anesthetized, head-fixed, or freely behaving animal recordings; tetrodes vs. silicon probes) makes it even harder to develop a universal spike sorting tool that performs well without human intervention. Prevalent problems in spike sorting include unit splitting due to drift, clustering of bursting cells, and nonstationarity in background noise. The last is particularly problematic in freely behaving animals, where the electrophysiological activity of hundreds or thousands of neurons is intermixed with noise arising from movement artifacts. We address these problems with a new spike sorting tool based on a template matching algorithm. Spike waveform templates are used to perform normalized cross-correlation (NCC) with the acquired signal for spike detection. The normalization addresses drift, bursting, and nonstationary noise, and provides a normative score for comparing different units in terms of cluster quality. Our spike sorting algorithm, D.sort, runs on the graphics processing unit (GPU) to accelerate computations. D.sort is a freely available software package (https://github.com/1804MB/Kvistiani-lab_Dsort).
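The detection step described above, sliding a spike template along the trace and thresholding the normalized cross-correlation, can be sketched as follows. This is a minimal NumPy illustration, not the D.sort implementation; the threshold value and the toy waveform are assumptions.

```python
import numpy as np

def ncc_detect(signal, template, threshold=0.8):
    """Slide a spike template along the signal and return indices where the
    normalized cross-correlation (NCC) exceeds `threshold`.

    Because each window is z-scored, NCC is invariant to local amplitude and
    offset, which is what makes it robust to drift, bursting amplitude
    changes, and nonstationary noise."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    scores = np.empty(len(signal) - n + 1)
    for i in range(len(scores)):
        w = signal[i:i + n]
        sd = w.std()
        if sd == 0:
            scores[i] = 0.0
            continue
        scores[i] = float(((w - w.mean()) / sd) @ t) / n  # in [-1, 1]
    return np.where(scores > threshold)[0], scores

# Toy trace: the same template embedded twice, once at half amplitude.
rng = np.random.default_rng(0)
template = np.exp(-np.arange(20) / 4.0) * np.sin(np.arange(20))
signal = rng.normal(0, 0.05, 200)
signal[50:70] += template          # full-amplitude spike
signal[130:150] += 0.5 * template  # half-amplitude spike, same shape
hits, _ = ncc_detect(signal, template, threshold=0.8)
```

Both embedded spikes exceed the threshold despite the two-fold amplitude difference, which a raw (un-normalized) correlation would not guarantee.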

2019 ◽  
Author(s):  
Jasper Wouters ◽  
Fabian Kloosterman ◽  
Alexander Bertrand

Abstract
Spike sorting is the process of retrieving the spike times of individual neurons present in an extracellular neural recording. Over the last decades, many spike sorting algorithms have been published. To guide a user towards a specific spike sorting algorithm for a specific recording setting (i.e., brain region and recording device), we provide an open-source graphical tool, written in Python, for the generation of hybrid ground-truth data. Hybrid ground-truth generation is a data-driven modelling paradigm in which spikes from a single unit are moved to a different location on the recording probe, thereby creating a virtual unit whose spike times are known. The tool enables a user to efficiently generate hybrid ground-truth datasets and use them to make informed choices between spike sorting algorithms, fine-tune algorithm parameters for the recording setting at hand, or gain a deeper understanding of those algorithms.
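The hybrid paradigm described above can be illustrated in a few lines. This is a simplified NumPy sketch, not the tool's actual code; the function and argument names are hypothetical.

```python
import numpy as np

def make_hybrid(recording, spike_times, waveform, src_chans, dst_chans):
    """Create hybrid ground truth: relocate one unit's spikes from
    `src_chans` to `dst_chans`, so a 'virtual' unit with known spike
    times appears at a new probe location.

    recording   : (n_channels, n_samples) array; a modified copy is returned
    spike_times : sample indices of the unit's spikes
    waveform    : (len(src_chans), n_wf) template of the unit
    """
    hybrid = recording.copy()
    n_wf = waveform.shape[1]
    for t in spike_times:
        sl = slice(t, t + n_wf)
        hybrid[src_chans, sl] -= waveform   # remove the unit from its original site
        hybrid[dst_chans, sl] += waveform   # re-insert it at the new site
    return hybrid
```

Since the insertion times are chosen by the user, the virtual unit comes with exact spike-time labels against which any sorter's output can be scored.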


2016 ◽  
Author(s):  
Pierre Yger ◽  
Giulia L.B. Spampinato ◽  
Elric Esposito ◽  
Baptiste Lefebvre ◽  
Stéphane Deny ◽  
...  

Abstract
Understanding how assemblies of neurons encode information requires recording large populations of cells in the brain. In recent years, multi-electrode arrays and large silicon probes have been developed to record simultaneously from hundreds or thousands of densely packed electrodes. However, these new devices challenge the classical way of doing spike sorting. Here we developed a new method to solve these issues, based on a highly automated algorithm to extract spikes from extracellular data, and show that this algorithm reaches near-optimal performance both in vitro and in vivo. The algorithm is composed of two main steps: 1) a “template-finding” phase to extract each cell’s template, i.e. the pattern of activity evoked over many electrodes when one neuron fires an action potential; and 2) a “template-matching” phase in which the templates are matched to the raw data to find the location of the spikes. Manual intervention by the user is reduced to a minimum, and the time spent on manual curation does not scale with the number of electrodes. We tested our algorithm with large-scale data from in vitro and in vivo recordings, from 32 to 4225 electrodes. We performed simultaneous extracellular and patch recordings to obtain “ground truth” data, i.e. cases where the solution to the sorting problem is at least partially known. The performance of our algorithm was always close to the best expected performance. We thus provide a general solution to sort spikes from large-scale extracellular recordings.
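The "template-finding" step, estimating each cell's multi-channel template by averaging waveform snippets around its spike times, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the window lengths are assumptions.

```python
import numpy as np

def extract_template(recording, spike_times, n_before=10, n_after=22):
    """Estimate a unit's multi-channel template by averaging the waveform
    snippets cut around its detected spike times.

    recording : (n_channels, n_samples) array
    Returns a (n_channels, n_before + n_after) template; spikes too close
    to the recording edges are skipped."""
    snippets = [recording[:, t - n_before:t + n_after]
                for t in spike_times
                if t - n_before >= 0 and t + n_after <= recording.shape[1]]
    return np.mean(snippets, axis=0)
```

Averaging cancels uncorrelated background noise, so the template sharpens as more spikes from the same unit are accumulated; the template-matching phase then scans the raw data for occurrences of this pattern.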


2021 ◽  
Vol 2 (1) ◽  
pp. 1-25
Author(s):  
Srinivasan Iyengar ◽  
Stephen Lee ◽  
David Irwin ◽  
Prashant Shenoy ◽  
Benjamin Weil

Buildings consume over 40% of the total energy in modern societies, and improving their energy efficiency can significantly reduce our energy footprint. In this article, we present WattScale, a data-driven approach to identify the least energy-efficient buildings from a large population of buildings in a city or a region. Unlike previous methods such as least-squares that use point estimates, WattScale uses Bayesian inference to capture the stochasticity in daily energy usage by estimating the distribution of parameters that affect a building, and then compares these with similar homes in a given population. WattScale also incorporates a fault detection algorithm to identify the underlying causes of energy inefficiency. We validate our approach using ground truth data from different geographical locations, which showcases its applicability in various settings. WattScale has two execution modes, (i) individual and (ii) region-based, which we highlight using two case studies. For the individual execution mode, we present results from a city containing >10,000 buildings and show that more than half of the buildings are inefficient in one way or another, indicating significant potential for energy improvement measures. Additionally, we provide probable causes of inefficiency and find that 41%, 23.73%, and 0.51% of homes have a poor building envelope, heating system faults, and cooling system faults, respectively. For the region-based execution mode, we show that WattScale can be extended to millions of homes in the U.S. due to the recent availability of representative energy datasets.
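The contrast with point estimates can be illustrated with a conjugate normal-normal update: instead of a single fitted value, the posterior is a distribution, so a building can be flagged with a probability of exceeding a population threshold. This is a toy sketch, not WattScale's actual model; all parameter names and values are assumptions.

```python
import numpy as np
from math import erf, sqrt

def posterior_mean_usage(daily_kwh, prior_mu, prior_var, noise_var):
    """Normal-normal conjugate update for a building's mean daily usage.
    The prior could come from similar homes in the population."""
    n = len(daily_kwh)
    xbar = float(np.mean(daily_kwh))
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mu = post_var * (prior_mu / prior_var + n * xbar / noise_var)
    return post_mu, post_var

def prob_inefficient(post_mu, post_var, threshold):
    """P(mean usage > threshold) under the Gaussian posterior."""
    z = (threshold - post_mu) / sqrt(post_var)
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))
```

A least-squares point estimate would report only the fitted mean; the posterior additionally quantifies how certain we are that the building is an outlier, which supports the ranking of least-efficient buildings.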


Author(s):  
Himansu Sekhar Pattanayak ◽  
Harsh K. Verma ◽  
Amrit Lal Sangal

Community detection is a pivotal part of network analysis and is classified as an NP-hard problem. In this paper, a novel community detection algorithm is proposed, which probabilistically predicts communities’ diameters using the local information of random seed nodes. A gravitation method is then applied to discover the communities surrounding the seed nodes, and the individual communities are combined to obtain the community structure of the whole network. The proposed algorithm, named the Local Gravitational Community Detection Algorithm (LGCDA), can also handle overlapping communities. LGCDA is evaluated on quality metrics and ground-truth data by comparing it with several widely used community detection algorithms on synthetic and real-world networks.
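A toy version of seed-centered, gravity-style expansion might look like the following. This is an illustrative sketch only, not the LGCDA formulation; the force definition and the `max_d` and `min_force` parameters are invented for illustration.

```python
from collections import deque

def shortest_dists(adj, seed, max_d):
    """BFS distances from `seed`, truncated at `max_d` hops."""
    dist = {seed: 0}
    q = deque([seed])
    while q:
        u = q.popleft()
        if dist[u] == max_d:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def local_gravity_community(adj, seed, max_d=2, min_force=1.0):
    """Grow a community around `seed`: a node joins if its gravity-like
    pull toward the seed, deg(seed) * deg(node) / dist**2, exceeds
    `min_force`. Only local information (a bounded BFS) is used."""
    deg = {u: len(vs) for u, vs in adj.items()}
    dist = shortest_dists(adj, seed, max_d)
    return {u for u, d in dist.items()
            if u == seed or deg[seed] * deg[u] / d ** 2 >= min_force}
```

Because each community is grown independently from its seed, the per-community sets can overlap, which is how a seed-based scheme accommodates overlapping community structure.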


2016 ◽  
Vol 25 (05) ◽  
pp. 1640003 ◽  
Author(s):  
Yoav Liberman ◽  
Adi Perry

Visual tracking in low frame rate (LFR) videos has many inherent difficulties for accurate target recovery, such as occlusions, abrupt motions, and rapid pose changes, so conventional tracking methods cannot be applied reliably. In this paper, we offer a new scheme for tracking objects in low frame rate videos. We present a method for integrating multiple metrics for template matching, as an extension of the particle filter. By inspecting a large dataset of tracking videos, we show that our method not only outperforms related benchmarks in the field, but also achieves better results both visually and quantitatively when compared against ground truth data.
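Fusing several template-matching metrics into a single particle weight might be sketched as below. The specific metrics (NCC for shape, a Gaussian score of the mean squared error for intensity) and the `alpha` blend are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def particle_weight(patch, template, alpha=0.5):
    """Combine two template-matching metrics into one particle weight:
    normalized cross-correlation (invariant to brightness/contrast) and
    a Gaussian score of the mean squared error (sensitive to intensity).
    The weight lies in (0, 1], higher meaning a better match."""
    p = patch.ravel().astype(float)
    t = template.ravel().astype(float)
    pz = (p - p.mean()) / (p.std() + 1e-12)
    tz = (t - t.mean()) / (t.std() + 1e-12)
    ncc = float(pz @ tz) / len(p)          # in [-1, 1]
    mse = float(np.mean((p - t) ** 2))
    return alpha * (ncc + 1) / 2 + (1 - alpha) * np.exp(-mse)
```

In a particle filter, each particle's candidate patch would be scored this way and the weights normalized before resampling; combining complementary metrics makes the weighting less brittle under the abrupt appearance changes typical of LFR video.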


AI ◽  
2021 ◽  
Vol 2 (3) ◽  
pp. 444-463
Author(s):  
Daniel Weber ◽  
Clemens Gühmann ◽  
Thomas Seel

Inertial-sensor-based attitude estimation is a crucial technology in various applications, from human motion tracking to autonomous aerial and ground vehicles. Application scenarios differ in the characteristics of the performed motion, the presence of disturbances, and environmental conditions. Since state-of-the-art attitude estimators do not generalize well over these characteristics, their parameters must be tuned for the individual motion characteristics and circumstances. We propose RIANN, a ready-to-use, neural network-based, parameter-free, real-time-capable inertial attitude estimator, which generalizes well across different motion dynamics, environments, and sampling rates, without the need for application-specific adaptations. We gather six publicly available datasets, of which we use two for method development and training, and four for evaluating the trained estimator in three test scenarios of varying practical relevance. Results show that RIANN outperforms state-of-the-art attitude estimation filters in the sense that it generalizes much better across a variety of motions and conditions in different applications, with different sensor hardware and different sampling frequencies. This holds even when the filters are tuned on each individual test dataset, whereas RIANN was trained on completely separate data and has never seen any of these test datasets. RIANN can be applied directly without adaptations or training and is therefore expected to enable plug-and-play solutions in numerous applications, especially when accuracy is crucial but no ground-truth data is available for tuning, or when motion and disturbance characteristics are uncertain. We made RIANN publicly available.


2020 ◽  
Vol 19 (1) ◽  
pp. 185-204 ◽  
Author(s):  
Alessio Paolo Buccino ◽  
Gaute Tomas Einevoll

Abstract
When recording neural activity from extracellular electrodes, both in vivo and in vitro, spike sorting is a required and very important processing step that allows for identification of single neurons’ activity. Spike sorting is a complex algorithmic procedure, and in recent years many groups have attempted to tackle this problem, resulting in numerous methods and software packages. However, validation of spike sorting techniques is complicated. It is an inherently unsupervised problem and it is hard to find universal metrics to evaluate performance. Simultaneous recordings that combine extracellular and patch-clamp or juxtacellular techniques can provide ground-truth data to evaluate spike sorting methods. However, their utility is limited by the fact that only a few cells can be measured at the same time. Simulated ground-truth recordings can provide a powerful alternative means to rank the performance of spike sorters. We present here MEArec, a Python-based software package which permits flexible and fast simulation of extracellular recordings. MEArec allows users to generate extracellular signals on various customizable electrode designs and can replicate various problematic aspects for spike sorting, such as bursting, spatio-temporal overlapping events, and drifts. We expect MEArec will provide a common testbench for spike sorting development and evaluation, in which spike sorting developers can rapidly generate and evaluate the performance of their algorithms.
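The core simulation idea, pasting known templates at known spike times over background noise, can be sketched as follows. This is a minimal illustration in the spirit of the tool described above, not its actual code; the function signature is hypothetical.

```python
import numpy as np

def simulate_recording(templates, spike_trains, n_samples, noise_std, seed=0):
    """Build a toy ground-truth recording: add each unit's multi-channel
    template at its known spike times on top of Gaussian background noise.

    templates    : list of (n_channels, n_wf) arrays, one per unit
    spike_trains : list of sample-index arrays, one per unit (the labels)
    Returns the (n_channels, n_samples) recording."""
    rng = np.random.default_rng(seed)
    n_chan = templates[0].shape[0]
    rec = rng.normal(0.0, noise_std, (n_chan, n_samples))
    for tmpl, times in zip(templates, spike_trains):
        n_wf = tmpl.shape[1]
        for t in times:
            if t + n_wf <= n_samples:
                rec[:, t:t + n_wf] += tmpl
    return rec
```

Because the spike trains are inputs rather than inferred quantities, the simulated recording comes with exact labels; effects such as bursting, overlaps, or drift can be emulated by varying the spike trains or the templates over time.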


2019 ◽  
Author(s):  
Alessio P. Buccino ◽  
Gaute T. Einevoll

Abstract
When recording neural activity from extracellular electrodes, both in vivo and in vitro, spike sorting is a required and very important processing step that allows for identification of single neurons’ activity. Spike sorting is a complex algorithmic procedure, and in recent years many groups have attempted to tackle this problem, resulting in numerous methods and software packages. However, validation of spike sorting techniques is complicated. It is an inherently unsupervised problem and it is hard to find universal metrics to evaluate performance. Simultaneous recordings that combine extracellular and patch-clamp or juxtacellular techniques can provide ground-truth data to evaluate spike sorting methods. However, their utility is limited by the fact that only a few cells can be measured at the same time. Simulated ground-truth recordings can provide a powerful alternative means to rank the performance of spike sorters. We present here MEArec, a Python-based software package which permits flexible and fast simulation of extracellular recordings. MEArec allows users to generate extracellular signals on various customizable electrode designs and can replicate various problematic aspects for spike sorting, such as bursting, spatio-temporal overlapping events, and drifts. We expect MEArec will provide a common testbench for spike sorting development and evaluation, in which spike sorting developers can rapidly generate and evaluate the performance of their algorithms.


2021 ◽  
Author(s):  
Samuel Garcia ◽  
Alessio Buccino ◽  
Pierre Yger

Recently, a new generation of devices has been developed to record neural activity simultaneously from hundreds of electrodes at very high spatial density, both for in vitro and in vivo applications. While these advances enable recording from many more cells, they also dramatically increase the number of overlapping "synchronous" spikes (colliding in space and/or in time), challenging the already complicated process of spike sorting (i.e. extracting isolated single-neuron activity from extracellular signals). In this work, we used synthetic ground-truth recordings to quantitatively benchmark the performance of state-of-the-art spike sorters, focusing specifically on spike collisions. Our results show that while modern template-matching based algorithms are more accurate than density-based approaches, all methods, to some extent, fail to detect synchronous spike events of neurons with similar extracellular signals. Interestingly, the performance of the sorters is not strongly affected by the spiking statistics of the recordings, in terms of average firing rates and spike-train correlation levels.
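Scoring a sorter against ground truth at the single-event level, the kind of per-spike comparison that exposes collision failures, can be sketched as below. The greedy matching strategy and the tolerance value are illustrative assumptions, not the benchmark's exact procedure.

```python
def match_spikes(gt_times, detected_times, tol=3):
    """Greedily match detected spike times to ground-truth times within
    `tol` samples; each detection matches at most one ground-truth spike.
    Returns (n_hits, n_misses, n_false_positives). Missed spikes that
    coincide with another unit's spikes reveal collision errors."""
    gt = sorted(gt_times)
    det = sorted(detected_times)
    used = [False] * len(det)
    hits = 0
    j = 0
    for t in gt:
        # Detections earlier than t - tol can never match later spikes.
        while j < len(det) and det[j] < t - tol:
            j += 1
        k = j
        while k < len(det) and det[k] <= t + tol:
            if not used[k]:
                used[k] = True
                hits += 1
                break
            k += 1
    return hits, len(gt) - hits, len(det) - hits
```

Restricting this count to ground-truth spikes that fall within a few samples of another unit's spikes yields a collision-specific recall, which is how per-event scoring isolates the failure mode discussed above.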


2021 ◽  
Vol 13 (10) ◽  
pp. 1966
Author(s):  
Christopher W Smith ◽  
Santosh K Panda ◽  
Uma S Bhatt ◽  
Franz J Meyer ◽  
Anushree Badola ◽  
...  

In recent years, there have been rapid improvements in both remote sensing methods and satellite image availability that have the potential to massively improve burn severity assessments of the Alaskan boreal forest. In this study, we utilized recent pre- and post-fire Sentinel-2 satellite imagery of the 2019 Nugget Creek and Shovel Creek burn scars located in Interior Alaska, both to assess burn severity across the burn scars and to test the effectiveness of several remote sensing methods for generating accurate map products: the Normalized Difference Vegetation Index (NDVI), the Normalized Burn Ratio (NBR), and Random Forest (RF) and Support Vector Machine (SVM) supervised classification. We used 52 Composite Burn Index (CBI) plots from the Shovel Creek burn scar and 28 from the Nugget Creek burn scar for training the classifiers and validating the products. For the Shovel Creek burn scar, the RF and SVM machine learning (ML) classification methods outperformed the traditional spectral indices, which use linear regression to separate burn severity classes (RF and SVM accuracy: 83.33%, versus NBR accuracy: 73.08%). However, for the Nugget Creek burn scar, the NDVI product (accuracy: 96%) outperformed the other indices and the ML classifiers. We demonstrated that ML classifiers can be very effective for reliable mapping of burn severity in the Alaskan boreal forest when sufficient ground truth data is available. Because classifier performance depends on the quantity of ground truth data, the ML classification methods are better suited to assessing burn severity when such data is plentiful, whereas the traditional spectral indices are better suited when ground truth data is limited. We also examined the relationship between burn severity, fuel type, and topography (aspect and slope) and found that it is site-dependent.
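The two spectral indices have simple closed forms; for Sentinel-2, the near-infrared (NIR) reflectance comes from band B8 and the shortwave-infrared (SWIR) from band B12. A minimal NumPy sketch (the epsilon guard against division by zero is an implementation choice, not part of the index definitions):

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR).
    Healthy vegetation reflects strongly in NIR, burned areas in SWIR,
    so NBR drops after a fire."""
    nir = nir.astype(float)
    swir = swir.astype(float)
    return (nir - swir) / (nir + swir + 1e-12)

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR: pre-fire NBR minus post-fire NBR.
    Higher dNBR indicates higher burn severity."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
```

NDVI is computed analogously with the red band in place of SWIR. Severity classes are then obtained by thresholding (or, as in the study above, regressing) the index values against the CBI field plots.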

