Reconstruction of in-vivo subthreshold activity of single neurons from large-scale spiking recordings

2019 ◽  
Author(s):  
Stylianos Papaioannou ◽  
André Marques Smith ◽  
David Eriksson

Summary Current developments in the manufacturing of silicon probes allow recording of spikes from large populations of neurons across several brain structures in freely moving animals. It remains technically challenging, however, to record the membrane potential in awake, behaving animals. Routine access to the subthreshold activity of neurons would be of great value for understanding the role of, for example, neuronal integration, oscillations, and excitability. Here we have developed a framework for reconstructing the subthreshold activity of single neurons from the spiking activity of large neuronal populations. The reconstruction accuracy and reliability were evaluated against ground truth data provided by simultaneous patch-clamp membrane potential recordings in vivo. Given the abundance of large-scale spike recordings in the contemporary systems neuroscience community, this approach provides general access to subthreshold activity and could hence shed light on the intricate mechanisms underlying the genesis of spiking activity.
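
A minimal sketch of the kind of reconstruction described above, assuming a simple linear read-out of smoothed population spike counts (the authors' actual estimator is not specified in the summary); the variable names and ridge penalty are illustrative:

import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.linear_model import Ridge

def reconstruct_vm(spike_counts, vm_patch, smooth_sigma=5, train_frac=0.5):
    """spike_counts: (n_bins, n_neurons) binned population spikes.
    vm_patch: (n_bins,) patch-clamp membrane potential (ground truth, spikes removed).
    Fits on the first half of the recording and scores on the held-out half."""
    X = gaussian_filter1d(spike_counts.astype(float), smooth_sigma, axis=0)
    split = int(train_frac * len(X))
    model = Ridge(alpha=1.0).fit(X[:split], vm_patch[:split])
    vm_pred = model.predict(X[split:])
    accuracy = np.corrcoef(vm_pred, vm_patch[split:])[0, 1]  # reconstruction accuracy
    return vm_pred, accuracy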

2016 ◽  
Author(s):  
Pierre Yger ◽  
Giulia L.B. Spampinato ◽  
Elric Esposito ◽  
Baptiste Lefebvre ◽  
Stéphane Deny ◽  
...  

Abstract Understanding how assemblies of neurons encode information requires recording large populations of cells in the brain. In recent years, multi-electrode arrays and large silicon probes have been developed to record simultaneously from hundreds or thousands of densely packed electrodes. However, these new devices challenge the classical way of doing spike sorting. Here we developed a new method to solve these issues, based on a highly automated algorithm to extract spikes from extracellular data, and show that this algorithm reaches near-optimal performance both in vitro and in vivo. The algorithm is composed of two main steps: 1) a “template-finding” phase to extract the cell templates, i.e. the pattern of activity evoked over many electrodes when one neuron fires an action potential; 2) a “template-matching” phase in which the templates are matched to the raw data to find the locations of the spikes. Manual intervention by the user is reduced to a minimum, and the time spent on manual curation does not scale with the number of electrodes. We tested our algorithm on large-scale data from in vitro and in vivo recordings, from 32 to 4225 electrodes. We performed simultaneous extracellular and patch recordings to obtain “ground truth” data, i.e. cases where the solution to the sorting problem is at least partially known. The performance of our algorithm was always close to the best expected performance. We thus provide a general solution for sorting spikes from large-scale extracellular recordings.
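
As an illustration of the two-step logic (a toy sketch, not the authors' implementation), the fragment below greedily matches pre-computed templates to the raw trace by least-squares amplitude fitting and peels off each detected spike; the threshold and array shapes are assumptions:

import numpy as np

def template_match(raw, templates, amp_threshold=0.5):
    """raw: (n_samples, n_channels) filtered extracellular data.
    templates: (n_cells, n_t, n_channels) cell templates from the template-finding phase."""
    raw = raw.copy()
    n_t = templates.shape[1]
    norms = np.array([(tpl ** 2).sum() for tpl in templates])
    spikes = []
    for t0 in range(raw.shape[0] - n_t):
        snippet = raw[t0:t0 + n_t]
        amps = np.array([(snippet * tpl).sum() for tpl in templates]) / norms  # best-fit amplitudes
        best = int(np.argmax(amps))
        if amps[best] > amp_threshold:
            spikes.append((t0, best))                         # spike time and putative cell
            raw[t0:t0 + n_t] -= amps[best] * templates[best]  # subtract the fitted spike
    return spikes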


eLife ◽  
2018 ◽  
Vol 7 ◽  
Author(s):  
Pierre Yger ◽  
Giulia LB Spampinato ◽  
Elric Esposito ◽  
Baptiste Lefebvre ◽  
Stéphane Deny ◽  
...  

In recent years, multielectrode arrays and large silicon probes have been developed to record simultaneously from hundreds to thousands of densely packed electrodes. However, they require novel methods to extract the spiking activity of large ensembles of neurons. Here, we developed a new toolbox to sort spikes from these large-scale extracellular data. To validate our method, we performed simultaneous extracellular and loose-patch recordings in rodents to obtain ‘ground truth’ data, where the solution to this sorting problem is known for one cell. The performance of our algorithm was always close to the best expected performance, over a broad range of signal-to-noise ratios, in vitro and in vivo. The algorithm is entirely parallelized and has been successfully tested on recordings with up to 4225 electrodes. Our toolbox thus offers a generic solution for sorting spikes accurately from up to thousands of electrodes.
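
One simple way to quantify performance against such ground truth (a hedged sketch, not necessarily the metric used in the paper) is to count sorted spikes that land within a small jitter window of the patch-recorded spikes of the known cell:

import numpy as np

def score_against_ground_truth(sorted_times, truth_times, jitter=1e-3):
    """Spike times in seconds; a truth spike is 'found' if a sorted spike lies within +/- jitter."""
    sorted_times = np.sort(np.asarray(sorted_times))
    hits = 0
    for t in truth_times:
        i = np.searchsorted(sorted_times, t)
        nearest = min([abs(sorted_times[j] - t) for j in (i - 1, i)
                       if 0 <= j < len(sorted_times)], default=np.inf)
        hits += nearest <= jitter
    recall = hits / len(truth_times)
    precision = hits / len(sorted_times) if len(sorted_times) else 0.0
    return precision, recall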


Author(s):  
Suppawong Tuarob ◽  
Conrad S. Tucker

The acquisition and mining of product feature data from online sources such as customer review websites and large-scale social media networks is an emerging area of research. In many existing design methodologies that acquire product feature preferences from online sources, the underlying assumption is that product features expressed by customers are explicitly stated and readily observable, to be mined using product feature extraction tools. In many scenarios, however, product feature preferences expressed by customers are implicit in nature and do not map directly to engineering design targets. For example, a customer may implicitly state “wow I have to squint to read this on the screen”, when the explicit product feature may be a larger screen. The authors of this work propose an inference model that automatically assigns the most probable explicit product feature desired by a customer, given an expressed implicit preference. The algorithm iteratively refines its inference model by presenting a hypothesis and determining its statistical validity using ground truth data. A case study involving smartphone product features expressed through Twitter networks is presented to demonstrate the effectiveness of the proposed methodology.
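
A hedged sketch of the inference step (the authors' actual model is more elaborate): a simple bag-of-words classifier maps an implicit statement to the most probable explicit product feature, and its hypotheses can then be checked against ground truth labels; the training pairs below are hypothetical:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labeled pairs: implicit customer statements -> explicit product features.
implicit_texts = ["I have to squint to read this on the screen",
                  "the battery dies before lunch",
                  "it barely fits in my pocket"]
explicit_features = ["larger screen", "longer battery life", "smaller form factor"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(implicit_texts, explicit_features)
print(model.predict(["my eyes hurt reading tweets on it"]))  # hypothesized explicit feature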


2019 ◽  
Author(s):  
Cody Baker ◽  
Emmanouil Froudarakis ◽  
Dimitri Yatsenko ◽  
Andreas S. Tolias ◽  
Robert Rosenbaum

Abstract A major goal in neuroscience is to estimate neural connectivity from large-scale extracellular recordings of neural activity in vivo. This is challenging in part because any such activity is modulated by the unmeasured external synaptic input to the network, known as the common input problem. Many different measures of functional connectivity have been proposed in the literature, but their direct relationship to synaptic connectivity is often assumed or ignored. For in vivo data, measuring this relationship would require knowledge of ground truth connectivity, which is nearly always unavailable. Instead, many studies use in silico simulations as benchmarks for investigation, but such approaches necessarily rely upon a variety of simplifying assumptions about the simulated network and can depend on numerous simulation parameters. We combine neuronal network simulations, mathematical analysis, and calcium imaging data to address the question of when and how functional connectivity, synaptic connectivity, and latent external input variability can be untangled. We show numerically and analytically that, even though the precision matrix of recorded spiking activity does not uniquely determine synaptic connectivity, it is often closely related to synaptic connectivity in practice under various network models. This relation becomes more pronounced when the spatial structure of neuronal variability is considered jointly with precision.
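
A minimal sketch of the comparison described above, assuming a simulated network whose ground-truth weight matrix W is known (illustrative only, not the authors' analysis pipeline): the precision matrix of binned spike counts is correlated entry-wise with the synaptic weights:

import numpy as np

def precision_vs_connectivity(spike_counts, W):
    """spike_counts: (n_bins, n_neurons) binned activity; W: (n_neurons, n_neurons) true weights."""
    cov = np.cov(spike_counts, rowvar=False)
    P = np.linalg.pinv(cov)                        # precision matrix of recorded activity
    off = ~np.eye(W.shape[0], dtype=bool)          # compare off-diagonal entries only
    return np.corrcoef(-P[off], W[off])[0, 1]      # sign flip: coupling tracks negative precision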


2016 ◽  
Author(s):  
Roshni Cooper ◽  
Shaul Yogev ◽  
Kang Shen ◽  
Mark Horowitz

Abstract Motivation: Microtubules (MTs) are polarized polymers that are critical for cell structure and axonal transport. They form a bundle in neurons, but beyond that, their organization is relatively unstudied. Results: We present MTQuant, a method for quantifying MT organization using light microscopy, which distills three parameters from MT images: the spacing of MT minus-ends, their average length, and the average number of MTs in a cross-section of the bundle. This method allows for robust and rapid in vivo analysis of MTs, rendering it more practical and more widely applicable than commonly-used electron microscopy reconstructions. MTQuant was successfully validated with three ground truth data sets and applied to over 3000 images of MTs in a C. elegans motor neuron. Availability: MATLAB code is available at http://roscoope.github.io/MTQuant Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
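
A hedged sketch of the three summary statistics named above (not the MTQuant code itself), computed from hypothetical per-microtubule minus-end positions and lengths along a bundle; the cross-section count follows from total microtubule length divided by bundle length:

import numpy as np

def bundle_stats(minus_ends, lengths, bundle_length):
    """minus_ends, lengths: 1-D arrays, one entry per microtubule, in microns."""
    spacing = np.mean(np.diff(np.sort(minus_ends)))   # mean spacing of MT minus-ends
    mean_length = np.mean(lengths)                    # average MT length
    coverage = np.sum(lengths) / bundle_length        # average number of MTs per cross-section
    return spacing, mean_length, coverage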


Author(s):  
Marian Muste ◽  
Ton Hoitink

With a continuous global increase in flood frequency and intensity, there is an immediate need for new science-based solutions for flood mitigation, resilience, and adaptation that can be quickly deployed in any flood-prone area. An integral part of these solutions is the availability of river discharge measurements delivered in real time with high spatiotemporal density and over large areas. Stream stages and the associated discharges are the most perceivable variables of the water cycle and the ones that eventually determine the levels of hazard during floods. Consequently, the availability of discharge records (a.k.a. streamflows) is paramount for flood-risk management because they provide actionable information for organizing activities before, during, and after floods, and they supply the data for planning and designing floodplain infrastructure. Moreover, discharge records represent the ground-truth data for developing and continuously improving the accuracy of the hydrologic models used for forecasting streamflows. Acquiring discharge data for streams is critically important not only for flood forecasting and monitoring but also for many other practical uses, such as monitoring water abstractions for supporting decisions in various socioeconomic activities (from agriculture to industry, transportation, and recreation) and for ensuring healthy ecological flows. All these activities require knowledge of past, current, and future flows in rivers and streams. Given its importance, the ability to measure flow in channels has preoccupied water users for millennia. Starting with the simplest volumetric methods of estimating flows, the measurement of discharge has evolved through continued innovation into sophisticated methods, so that today the data can be acquired and communicated continuously in real time. There is no essential difference between the instruments and methods used to acquire streamflow data during normal conditions and during floods. Measurements during floods are, however, complex, hazardous, and of limited accuracy compared with those acquired during normal flows. The essential differences in the configuration and operation of the instruments and methods for discharge estimation stem from the type of measurements they acquire: discrete, autonomous measurements (i.e., measurements that can be taken at any time and place) and those acquired continuously (i.e., estimates based on indirect methods developed for fixed locations). Regardless of the measurement situation and approach, the main concern of the data providers for flooding (as well as for other areas of water resource management) is the timely delivery of accurate discharge data at flood-prone locations across river basins.
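
As a concrete illustration of the indirect, fixed-location methods mentioned above (a commonly used textbook approach, not taken from the article): a stage-discharge rating curve Q = a(h - h0)^b is fitted to a handful of discrete gaugings and then applied to the continuous stage record; the numbers below are hypothetical:

import numpy as np
from scipy.optimize import curve_fit

def rating_curve(h, a, h0, b):
    return a * np.clip(h - h0, 0.0, None) ** b

stage_gauged = np.array([0.8, 1.1, 1.6, 2.3, 3.0])          # m, discrete gaugings
discharge_gauged = np.array([4.0, 9.5, 24.0, 62.0, 120.0])  # m^3/s, measured discharges
params, _ = curve_fit(rating_curve, stage_gauged, discharge_gauged, p0=[10.0, 0.5, 1.7])
continuous_stage = np.array([1.0, 1.9, 2.7])                # m, from a stage sensor
print(rating_curve(continuous_stage, *params))              # continuous discharge estimates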


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ranjit Mahato ◽  
Gibji Nimasow ◽  
Oyi Dai Nimasow ◽  
Dhoni Bushi

Abstract The Sonitpur and Udalguri districts of Assam possess rich tropical forests with equally important faunal species. The Nameri National Park, Sonai-Rupai Wildlife Sanctuary, and other Reserved Forests are areas of attraction for tourists and wildlife lovers. However, these protected areas are reportedly facing the problem of encroachment and large-scale deforestation. Therefore, this study attempts to estimate forest cover change in the area by integrating remotely sensed data from 1990, 2000, 2010, and 2020 with a Geographic Information System. The Maximum Likelihood algorithm-based supervised classification shows acceptable agreement between the classified images and the ground truth data, with an overall accuracy of about 96% and a Kappa coefficient of 0.95. The results reveal a forest cover loss of 7.47% from 1990 to 2000 and 7.11% from 2000 to 2010. However, there was a slight gain of 2.34% in forest cover from 2010 to 2020. The net change from forest to non-forest was 195.17 km2 over the last forty years. The forest transition map shows a declining trend in forest remaining forest until 2010 and a slight increase thereafter. There was a considerable decline in forest-to-non-forest conversion (from 11.94% to 3.50%) between 2000–2010 and 2010–2020. Further, a perceptible gain was also observed in non-forest-to-forest conversion during the last four decades. The overlay analysis of the forest cover maps shows an area of 460.76 km2 (28.89%) as forest (unchanged), 764.21 km2 (47.91%) as non-forest (unchanged), 282.67 km2 (17.72%) as deforestation, and 87.50 km2 (5.48%) as afforestation. The study found hotspots of deforestation in the areas closest to the National Park, Wildlife Sanctuary, and Reserved Forests due to encroachment for human habitation, agriculture, and timber/fuelwood extraction. Therefore, the study suggests an early declaration of these protected areas as an Eco-Sensitive Zone to control the increasing trend of deforestation.
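
For reference, the overall accuracy and Kappa coefficient reported above can be computed from a confusion matrix of classified versus ground-truth pixels; a small sketch (the matrix itself would come from the accuracy assessment and is not shown):

import numpy as np

def overall_accuracy_and_kappa(confusion):
    """confusion[i, j] = number of pixels of true class i assigned to class j."""
    total = confusion.sum()
    observed = np.trace(confusion) / total                                         # overall accuracy
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total ** 2  # chance agreement
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa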


Author(s):  
Zhihan Fang ◽  
Yu Yang ◽  
Guang Yang ◽  
Yikuan Xian ◽  
Fan Zhang ◽  
...  

Data from cellular networks have proved to be one of the most promising ways to understand large-scale human mobility for various ubiquitous computing applications, owing to the high penetration of cellphones and low collection cost. Existing mobility models driven by cellular network data suffer from sparse spatial-temporal observations because user locations are recorded only with cellphone activities, e.g., calls, texts, or internet access. In this paper, we design a human mobility recovery system called CellSense that takes sparse cellular billing records (CBR) as input and outputs dense, continuous records, closing the sensing gap that arises when cellular networks are used as sensing systems for human mobility. There is limited work on this kind of recovery system at large scale because, even though it is straightforward to design a recovery system based on regression models, it is very challenging to evaluate these models at large scale due to the lack of ground truth data. In this paper, we explore a new opportunity based on the upgrade of cellular infrastructures to obtain cellular network signaling data as the ground truth data, which log the interaction between cellphones and cellular towers at the signal level (e.g., attaching, detaching, paging) even without billable activities. Based on the signaling data, we design CellSense for human mobility recovery by integrating collective mobility patterns with individual mobility modeling, which achieves a 35.3% improvement over state-of-the-art models. The key application of our recovery model is to take the regular sparse CBR data that a researcher already has and recover the data missing due to the sensing gaps of CBR, producing dense cellular data with which to train a machine learning model for their use cases, e.g., next-location prediction.
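
A hedged sketch of the recovery idea (a simplification, not the CellSense model): the gap between two billed observations is filled greedily with the tower sequence that is most likely under collective transition counts learned from the population; tower indices and the greedy rule are illustrative:

import numpy as np

def recover_gap(prev_tower, next_tower, gap_len, transition_counts):
    """transition_counts[i, j]: population-level counts of moves from tower i to tower j."""
    P = transition_counts + 1e-9
    P = P / P.sum(axis=1, keepdims=True)             # collective transition probabilities
    path, current = [], prev_tower
    for _ in range(gap_len):
        scores = P[current] * P[:, next_tower]       # stay likely while heading toward the next fix
        current = int(np.argmax(scores))
        path.append(current)
    return path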


2013 ◽  
Vol 30 (10) ◽  
pp. 2452-2464 ◽  
Author(s):  
J. H. Middleton ◽  
C. G. Cooke ◽  
E. T. Kearney ◽  
P. J. Mumford ◽  
M. A. Mole ◽  
...  

Abstract Airborne scanning laser technology provides an effective method to systematically survey surface topography and changes in that topography over time. In this paper, the authors describe the capability of a rapid-response airborne lidar system and present results from a set of surveys of Narrabeen–Collaroy Beach, Sydney, New South Wales, Australia, carried out over a short period of time during which significant erosion and deposition of the subaerial beach occurred. The airborne lidar data were obtained using a Riegl Q240i lidar coupled with a NovAtel SPAN-CPT integrated Global Navigation Satellite System (GNSS) and inertial unit and flown at various altitudes. A set of the airborne lidar data is compared with ground-truth data acquired from the beach using a GNSS/real-time kinematic (RTK) system mounted on an all-terrain vehicle. The comparison shows consistency between the systems, with the airborne lidar data differing from the ground-truth data by less than 0.02 m when four surveys are undertaken, provided a method of removing outliers (developed here and designated “weaving”) is used. The combination of airborne lidar data with ground-truth data provides an excellent method of obtaining high-quality topographic data. Using the results from this analysis, it is shown that airborne lidar data alone produce results that can be used for ongoing large-scale surveys of beaches with reliable accuracy, and that the enhanced accuracy resulting from multiple airborne surveys can be assessed quantitatively.
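
A simplified sketch of the lidar-to-ground-truth comparison (the paper's “weaving” procedure is not reproduced here): airborne lidar elevations are differenced against the nearest RTK ground points, gross outliers are rejected with a robust MAD filter, and the vertical bias is summarized; the thresholds are assumptions:

import numpy as np
from scipy.spatial import cKDTree

def lidar_vs_ground_truth(lidar_xyz, rtk_xyz, max_dist=1.0, mad_k=3.0):
    """lidar_xyz, rtk_xyz: (n, 3) arrays of easting, northing, elevation in metres."""
    tree = cKDTree(rtk_xyz[:, :2])
    dist, idx = tree.query(lidar_xyz[:, :2])          # nearest ground-truth point in plan view
    dz = lidar_xyz[:, 2] - rtk_xyz[idx, 2]
    dz = dz[dist < max_dist]                          # keep only nearby point pairs
    med = np.median(dz)
    mad = np.median(np.abs(dz - med))
    dz = dz[np.abs(dz - med) < mad_k * 1.4826 * mad]  # robust outlier rejection
    return dz.mean(), dz.std()                        # vertical bias and spread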


2020 ◽  
Vol 24 ◽  
pp. 63-86
Author(s):  
Francisco Mena ◽  
Ricardo Ñanculef ◽  
Carlos Valle

The lack of annotated data is one of the major barriers facing machine learning applications today. Learning from crowds, i.e., collecting ground-truth data from multiple inexpensive annotators, has become a common method to cope with this issue. It has recently been shown that modeling the varying quality of the annotations obtained in this way is fundamental to obtaining satisfactory performance in tasks where inexpert annotators may represent the majority but not the most trusted group. Unfortunately, existing techniques represent annotation patterns for each annotator individually, making the models difficult to estimate in large-scale scenarios. In this paper, we present two models to address these problems. Both methods are based on the hypothesis that it is possible to learn collective annotation patterns by introducing confusion matrices that involve groups of data point annotations or annotators. The first approach clusters data points with a common annotation pattern, regardless of the annotators from which the labels have been obtained. Implicitly, this method attributes annotation mistakes to the complexity of the data itself and not to the variable behavior of the annotators. The second approach explicitly maps annotators to latent groups that are collectively parametrized to learn a common annotation pattern. Our experimental results show that, compared with other methods for learning from crowds, both methods have advantages in scenarios with a large number of annotators and a small number of annotations per annotator.
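
A compact sketch of the collective-confusion-matrix idea (a deliberate simplification of the two models above): all annotators share a single confusion matrix, estimated by EM alternating between posteriors over the true labels and the shared matrix; array shapes and smoothing are assumptions:

import numpy as np

def shared_confusion_em(labels, n_classes, n_iter=20):
    """labels: (n_items, n_annotations) integer annotations, same count per item."""
    n_items = labels.shape[0]
    # initialize label posteriors with per-item vote fractions
    q = np.zeros((n_items, n_classes))
    for i, row in enumerate(labels):
        q[i] = np.bincount(row, minlength=n_classes) / len(row)
    for _ in range(n_iter):
        # M-step: shared confusion matrix C[true, observed], Laplace-smoothed
        C = np.ones((n_classes, n_classes))
        for i, row in enumerate(labels):
            for obs in row:
                C[:, obs] += q[i]
        C /= C.sum(axis=1, keepdims=True)
        # E-step: posterior over true labels under the shared matrix
        prior = q.mean(axis=0) + 1e-12
        for i, row in enumerate(labels):
            logp = np.log(prior) + np.log(C[:, row]).sum(axis=1)
            q[i] = np.exp(logp - logp.max())
            q[i] /= q[i].sum()
    return q, C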

