VolPy: Automated and scalable analysis pipelines for voltage imaging datasets

2021 ◽  
Vol 17 (4) ◽  
pp. e1008806 ◽  
Author(s):  
Changjia Cai ◽  
Johannes Friedrich ◽  
Amrita Singh ◽  
M. Hossein Eybposh ◽  
Eftychios A. Pnevmatikakis ◽  
...  

Voltage imaging enables monitoring neural activity at sub-millisecond and sub-cellular scale, unlocking the study of subthreshold activity, synchrony, and network dynamics with unprecedented spatio-temporal resolution. However, high data rates (>800 MB/s) and low signal-to-noise ratios create bottlenecks for analyzing such datasets. Here we present VolPy, an automated and scalable pipeline to pre-process voltage imaging datasets. VolPy features motion correction, memory mapping, automated segmentation, denoising and spike extraction, all built on a highly parallelizable, modular, and extensible framework optimized for memory and speed. To aid automated segmentation, we introduce a corpus of 24 manually annotated datasets from different preparations, brain areas and voltage indicators. We benchmark VolPy against ground truth segmentation, simulations and electrophysiology recordings, and we compare its performance with existing algorithms in detecting spikes. Our results indicate that VolPy's performance in spike extraction and scalability are state-of-the-art.
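For illustration, a minimal sketch of a generic spike-extraction step on a single-neuron voltage trace follows. It uses simple high-pass filtering and robust thresholding; this is an assumption-laden stand-in, not VolPy's actual algorithm, and all function names and parameters below are hypothetical.

# Illustrative sketch of spike extraction from a voltage trace.
# Generic high-pass-filter-and-threshold approach, NOT VolPy's algorithm;
# names and parameter values are hypothetical.
import numpy as np
from scipy.signal import butter, filtfilt

def extract_spikes(trace, fs=10000.0, cutoff_hz=100.0, thresh_std=4.0):
    """Return sample indices of putative spikes in a 1-D voltage trace."""
    # High-pass filter to remove slow (subthreshold) fluctuations.
    b, a = butter(3, cutoff_hz / (fs / 2.0), btype="high")
    hp = filtfilt(b, a, trace)
    # Threshold at a multiple of a robust noise estimate (median abs. dev.).
    noise = np.median(np.abs(hp)) / 0.6745
    above = hp > thresh_std * noise
    # Keep only rising-edge crossings so each spike is counted once.
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

# Example: 1 s of noise with three injected spikes at known times.
rng = np.random.default_rng(0)
trace = rng.normal(0, 1, 10000)
trace[[2000, 5000, 8000]] += 20.0
print(extract_spikes(trace))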

Author(s):  
Changjia Cai ◽  
Johannes Friedrich ◽  
Eftychios A Pnevmatikakis ◽  
Kaspar Podgorski ◽  
Andrea Giovannucci

Abstract Voltage imaging enables monitoring neural activity at sub-millisecond and sub-compartment scale, and therefore opens the path to studying sub-threshold activity, synchrony, and network dynamics with unprecedented spatio-temporal resolution. However, high data rates (>800 MB/s) and low signal-to-noise ratios have created a severe bottleneck for the analysis of such datasets. Here we present VolPy, the first turn-key, automated and scalable pipeline to pre-process voltage imaging datasets. VolPy features fast motion correction, memory mapping, segmentation, and spike inference, all built on a highly parallelized and computationally efficient framework that optimizes memory and speed. Given the lack of single-cell voltage imaging ground truth examples, we introduce a corpus of 24 manually annotated datasets from different preparations and voltage indicators. We benchmark VolPy against this corpus and electrophysiology recordings, demonstrating excellent performance in neuron localization, spike extraction, and scalability.
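The memory-mapping stage can be illustrated with NumPy's memmap: the movie is written to disk once and then processed in bounded-memory chunks. This is a generic sketch of the idea, not VolPy's internal file format; file name, shape, and chunk size are illustrative.

# Generic sketch of memory-mapped, chunked processing for a large movie.
# File name, shape, and chunk size are illustrative, not VolPy's format.
import numpy as np

n_frames, h, w = 1000, 128, 128
mm = np.memmap("movie.mmap", dtype="float32", mode="w+",
               shape=(n_frames, h, w))

# Process the movie in chunks so peak RAM stays bounded, e.g. computing
# a mean image without loading all frames at once.
mean_img = np.zeros((h, w), dtype="float64")
chunk = 100
for start in range(0, n_frames, chunk):
    mean_img += mm[start:start + chunk].sum(axis=0)
mean_img /= n_frames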


Author(s):  
R. Prabha ◽  
M. Tom ◽  
M. Rothermel ◽  
E. Baltsavias ◽  
L. Leal-Taixe ◽  
...  

Abstract. Lake ice is a strong climate indicator and has been recognised as part of the Essential Climate Variables (ECV) by the Global Climate Observing System (GCOS). The dynamics of freezing and thawing, and possible shifts of freezing patterns over time, can help in understanding the local and global climate systems. One way to acquire spatio-temporal information about lake ice formation, independent of clouds, is to analyse webcam images. This paper moves towards a universal model for monitoring lake ice with freely available webcam data. We demonstrate good performance, including the ability to generalise across different winters and lakes, with a state-of-the-art Convolutional Neural Network (CNN) model for semantic image segmentation, Deeplab v3+. Moreover, we design a variant of that model, termed Deep-U-Lab, which predicts sharper, more accurate segmentation boundaries. We have tested the model's ability to generalise with data from multiple camera views and two different winters. On average, it achieves Intersection-over-Union (IoU) values of ≈71% across different cameras and ≈69% across different winters, greatly outperforming prior work, and it still reaches ≈60% IoU on arbitrary images scraped from photo-sharing websites. As part of the work, we introduce a new benchmark dataset of webcam images, Photi-LakeIce, from multiple cameras and two different winters, along with pixel-wise ground truth annotations.
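For reference, the Intersection-over-Union metric reported above can be computed as in the sketch below; the masks here are random stand-ins for real prediction and ground-truth label maps.

# Sketch of the Intersection-over-Union (IoU) metric used to evaluate
# segmentation results; masks are random stand-ins for real label maps.
import numpy as np

def iou(pred, gt):
    """IoU between two boolean masks of equal shape."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0  # both empty: perfect match

rng = np.random.default_rng(1)
pred = rng.random((240, 320)) > 0.5
gt = rng.random((240, 320)) > 0.5
print(f"IoU = {iou(pred, gt):.3f}")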


2018 ◽  
Author(s):  
Indrasen Singh

Device-to-Device (D2D) communication and Non-Orthogonal Multiple Access (NOMA) have become topics of interest for researchers and are widely recognized as techniques for next-generation cellular wireless networks. D2D offers direct communication among proximate mobile users without routing data through the base station, enabling high data rates and power-control mechanisms. When the direct D2D link is long or the channel quality is poor, direct D2D communication suffers large propagation losses; such scenarios use relay-assisted D2D communication to improve transmission capacity and coverage. NOMA, in turn, is one of the many technologies that promise greater capacity gain and spectral efficiency than the present state of the art, and is a candidate technology for 5G cellular networks. In this book, the fundamentals, state of the art, applications, and research challenges of D2D and NOMA are discussed in simple language.
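To make the power-domain NOMA idea concrete, here is a small sketch, under standard textbook assumptions, of two-user downlink rates with successive interference cancellation (SIC); the power split, channel gains, and noise power are illustrative numbers.

# Sketch of two-user downlink power-domain NOMA with SIC, using the
# standard Shannon-rate expressions; all numbers are illustrative.
import math

P = 1.0                    # total transmit power
a_near, a_far = 0.2, 0.8   # power split: weak (far) user gets more power
g_near, g_far = 10.0, 1.0  # channel gains (near user has the better channel)
N0 = 0.1                   # noise power

# Far user decodes its own signal, treating the near user's as noise.
sinr_far = (a_far * P * g_far) / (a_near * P * g_far + N0)
# Near user first decodes and cancels the far user's signal (SIC),
# then decodes its own signal interference-free.
sinr_near = (a_near * P * g_near) / N0

rate_far = math.log2(1 + sinr_far)    # bits/s/Hz
rate_near = math.log2(1 + sinr_near)
print(f"far-user rate  = {rate_far:.2f} b/s/Hz")
print(f"near-user rate = {rate_near:.2f} b/s/Hz")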


2018 ◽  
Vol 14 (12) ◽  
pp. 1915-1960 ◽  
Author(s):  
Rudolf Brázdil ◽  
Andrea Kiss ◽  
Jürg Luterbacher ◽  
David J. Nash ◽  
Ladislava Řezníčková

Abstract. The use of documentary evidence to investigate past climatic trends and events has become a recognised approach in recent decades. This contribution presents the state of the art in its application to droughts. The range of documentary evidence is very wide, including general annals, chronicles, memoirs and diaries kept by missionaries, travellers and those specifically interested in the weather; records kept by administrators tasked with keeping accounts and other financial and economic records; legal-administrative evidence; religious sources; letters; songs; newspapers and journals; pictographic evidence; chronograms; epigraphic evidence; early instrumental observations; society commentaries; and compilations and books. These are available from many parts of the world. This variety of documentary information is evaluated with respect to the reconstruction of hydroclimatic conditions (precipitation, drought frequency and drought indices). Documentary-based drought reconstructions are then addressed in terms of long-term spatio-temporal fluctuations, major drought events, relationships with external forcing and large-scale climate drivers, socio-economic impacts and human responses. Documentary-based drought series are also considered from the viewpoint of spatio-temporal variability for certain continents, and their employment together with hydroclimate reconstructions from other proxies (in particular tree rings) is discussed. Finally, conclusions are drawn, and challenges for the future use of documentary evidence in the study of droughts are presented.


2018 ◽  
Vol 7 (1.8) ◽  
pp. 245
Author(s):  
Jayakumari J ◽  
Rakhi K J

With the widespread, effective usage of LEDs, visible light communication (VLC) systems have recently attracted increasing interest in the field of wireless communication. VLC is envisioned as an appealing alternative to RF systems because of the advantages of LEDs, such as high communication security and a rich spectrum. To achieve tolerable inter-symbol interference (ISI) and high data rates, OFDM can be employed in VLC. In this paper, the performance of a VLC system with popular unipolar variants of OFDM, viz. Flip-OFDM and ACO-OFDM, is analyzed in fading channels. The simulation results show that the Flip-OFDM VLC system outperforms the ACO-OFDM VLC system in terms of bit error rate and is well suited for future 5G applications.
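The Flip-OFDM unipolar conversion at the heart of this comparison can be sketched as follows: a real-valued bipolar OFDM frame (obtained via a Hermitian-symmetric IFFT) is split into its positive part and its flipped negative part, transmitted in two consecutive subframes, and recombined at the receiver. This is a textbook-level sketch, not the authors' simulation setup.

# Sketch of the Flip-OFDM unipolar conversion for intensity-modulated VLC:
# a real bipolar OFDM frame is split into its positive part and its
# flipped negative part, sent in two consecutive subframes.
import numpy as np

N = 64                                   # subcarriers
rng = np.random.default_rng(2)
# QPSK symbols on subcarriers 1..N/2-1, mirrored with Hermitian symmetry
# so that the IFFT output is real-valued.
data = (rng.choice([-1, 1], N // 2 - 1) +
        1j * rng.choice([-1, 1], N // 2 - 1)) / np.sqrt(2)
X = np.zeros(N, dtype=complex)
X[1:N // 2] = data
X[N // 2 + 1:] = np.conj(data[::-1])

x = np.fft.ifft(X).real                  # real, bipolar OFDM frame
sub_pos = np.maximum(x, 0)               # subframe 1: positive part
sub_neg = np.maximum(-x, 0)              # subframe 2: flipped negative part

# Receiver recombines the two unipolar subframes.
x_hat = sub_pos - sub_neg
assert np.allclose(x, x_hat)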


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
João Lobo ◽  
Rui Henriques ◽  
Sara C. Madeira

Abstract Background Three-way data have gained popularity due to their capacity to describe inherently multivariate and temporal events, such as biological responses, social interactions along time, urban dynamics, or complex geophysical phenomena. Triclustering, the subspace clustering of three-way data, enables the discovery of patterns corresponding to data subspaces (triclusters) with values correlated across the three dimensions (observations × features × contexts). With an increasing number of algorithms being proposed, effectively comparing them with state-of-the-art algorithms is paramount. These comparisons are usually performed using real data without a known ground truth, thus limiting the assessments. In this context, we propose a synthetic data generator, G-Tric, allowing the creation of synthetic datasets with configurable properties and the possibility to plant triclusters. The generator is prepared to create datasets resembling real three-way data from biomedical and social data domains, with the additional advantage of providing the ground truth (the triclustering solution) as output. Results G-Tric can replicate real-world datasets and create new ones that match researchers' needs across several properties, including data type (numeric or symbolic), dimensions, and background distribution. Users can tune the patterns and structure that characterize the planted triclusters (subspaces) and how they interact (overlapping). Data quality can also be controlled by defining the amount of missing values, noise, or errors. Furthermore, a benchmark of datasets resembling real data is made available, together with the corresponding triclustering solutions (planted triclusters) and generating parameters. Conclusions Triclustering evaluation using G-Tric makes it possible to combine both intrinsic and extrinsic metrics, yielding more reliable comparisons of solutions. A set of predefined datasets, mimicking widely used three-way data and exploring crucial properties, was generated and made available, highlighting G-Tric's potential to advance the triclustering state of the art by easing the evaluation of new triclustering approaches.
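The core idea of planting a tricluster in synthetic three-way data can be sketched in a few lines; the sizes, pattern, and noise level below are illustrative and do not reflect G-Tric's actual API.

# Sketch of "planting" a tricluster in synthetic three-way data, in the
# spirit of what a generator like G-Tric produces (background noise plus
# a coherent subspace); sizes and values are illustrative, not G-Tric's API.
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_feat, n_ctx = 100, 50, 10
data = rng.normal(0.0, 1.0, (n_obs, n_feat, n_ctx))   # background

# Plant a constant-pattern tricluster on a random subspace
# (observations x features x contexts), plus small noise.
obs = rng.choice(n_obs, 10, replace=False)
feat = rng.choice(n_feat, 5, replace=False)
ctx = rng.choice(n_ctx, 3, replace=False)
data[np.ix_(obs, feat, ctx)] = 5.0 + rng.normal(0.0, 0.1, (10, 5, 3))

# The planted indices are the ground-truth triclustering solution.
ground_truth = {"obs": obs, "feat": feat, "ctx": ctx}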


2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Aysen Degerli ◽  
Mete Ahishali ◽  
Mehmet Yamac ◽  
Serkan Kiranyaz ◽  
Muhammad E. H. Chowdhury ◽  
...  

Abstract Computer-aided diagnosis has become a necessity for accurate and immediate coronavirus disease 2019 (COVID-19) detection to aid treatment and prevent the spread of the virus. Numerous studies have proposed to use Deep Learning techniques for COVID-19 diagnosis. However, they have used very limited chest X-ray (CXR) image repositories for evaluation, with a small number (a few hundred) of COVID-19 samples. Moreover, these methods can neither localize nor grade the severity of COVID-19 infection. For this purpose, recent studies proposed to explore the activation maps of deep networks. However, they remain inaccurate for localizing the actual infection, making them unreliable for clinical use. This study proposes a novel method for the joint localization, severity grading, and detection of COVID-19 from CXR images by generating the so-called infection maps. To accomplish this, we have compiled the largest dataset, with 119,316 CXR images including 2951 COVID-19 samples, where the annotation of the ground-truth segmentation masks is performed on CXRs by a novel collaborative human–machine approach. Furthermore, we publicly release the first CXR dataset with ground-truth segmentation masks of the COVID-19 infected regions. A detailed set of experiments shows that state-of-the-art segmentation networks can learn to localize COVID-19 infection with an F1-score of 83.20%, which is significantly superior to the activation maps created by previous methods. Finally, the proposed approach achieved a COVID-19 detection performance of 94.96% sensitivity and 99.88% specificity.
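For reference, the reported metrics (F1-score for segmentation, sensitivity and specificity for detection) can be computed from confusion counts as sketched below; the masks here are random stand-ins.

# Sketch of the evaluation metrics reported above, computed from
# confusion counts; masks are random stand-ins for real data.
import numpy as np

def confusion(pred, gt):
    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    tn = np.sum(~pred & ~gt)
    return tp, fp, fn, tn

rng = np.random.default_rng(4)
pred = rng.random((256, 256)) > 0.5
gt = rng.random((256, 256)) > 0.5
tp, fp, fn, tn = confusion(pred, gt)

f1 = 2 * tp / (2 * tp + fp + fn)        # segmentation F1 (Dice)
sensitivity = tp / (tp + fn)            # true positive rate
specificity = tn / (tn + fp)            # true negative rate
print(f"F1={f1:.3f}  Se={sensitivity:.3f}  Sp={specificity:.3f}")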


Author(s):  
Dimitra Flouri ◽  
Daniel Lesnic ◽  
Constantina Chrysochou ◽  
Jehill Parikh ◽  
Peter Thelwall ◽  
...  

Abstract Introduction Model-driven registration (MDR) is a general approach to removing patient motion in quantitative imaging. In this study, we investigate whether MDR can effectively correct the motion in free-breathing MR renography (MRR). Materials and methods MDR was generalised to linear tracer-kinetic models and implemented using 2D or 3D free-form deformations (FFD) with multi-resolution and gradient-descent optimization. MDR was evaluated using a kidney-mimicking digital reference object (DRO) and free-breathing patient data acquired at high temporal resolution in multi-slice 2D (5 patients) and 3D acquisitions (8 patients). Registration accuracy was assessed by comparison to the ground-truth DRO, by calculating the Hausdorff distance (HD) between ground-truth masks and segmentations, and by visual evaluation of dynamic images, signal-time courses and parametric maps (all data). Results DRO data showed that the bias and precision of parameter maps after MDR are indistinguishable from motion-free data. MDR led to a reduction in HD (HD_unregistered = 9.98 ± 9.76 vs. HD_registered = 1.63 ± 0.49). Visual inspection showed that MDR effectively removed motion effects in the dynamic data, leading to a clear improvement in anatomical delineation on parametric maps and a reduction in motion-induced oscillations on signal-time courses. Discussion MDR provides effective motion correction of MRR in synthetic and patient data. Future work is needed to compare its performance against other, more established methods.
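The Hausdorff distance used above to assess registration accuracy can be sketched with SciPy's directed_hausdorff; the binary masks here are toy examples.

# Sketch of the (symmetric) Hausdorff distance between two binary masks,
# as used above to assess registration accuracy; masks are toy examples.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(mask_a, mask_b):
    """Symmetric Hausdorff distance between the foreground pixels."""
    a = np.argwhere(mask_a)   # (row, col) coordinates of foreground
    b = np.argwhere(mask_b)
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

a = np.zeros((64, 64), dtype=bool); a[20:40, 20:40] = True
b = np.zeros((64, 64), dtype=bool); b[24:44, 22:42] = True   # shifted copy
print(f"HD = {hausdorff(a, b):.2f} pixels")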


2021 ◽  
Vol 15 (6) ◽  
pp. 1-21
Author(s):  
Huandong Wang ◽  
Yong Li ◽  
Mu Du ◽  
Zhenhui Li ◽  
Depeng Jin

Both app developers and service providers have strong motivations to understand when and where certain apps are used by users. However, this has been a challenging problem due to the highly skewed and noisy app usage data. Moreover, apps are regarded as independent items in existing studies, which fail to capture the hidden semantics in app usage traces. In this article, we propose App2Vec, a powerful representation learning model to learn the semantic embedding of apps with consideration of spatio-temporal context. Based on the obtained semantic embeddings, we develop a probabilistic model based on the Bayesian mixture model and the Dirichlet process to capture the when, where, and what semantics of app usage and to predict future usage. We evaluate our model using two different app usage datasets, which involve over 1.7 million users and 2,000+ apps. Evaluation results show that our proposed App2Vec algorithm outperforms the state-of-the-art algorithms in app usage prediction with a performance gap of over 17.0%.
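As a rough illustration of learning app embeddings from usage traces, the sketch below trains a skip-gram model (gensim's Word2Vec) on chronological app-launch sequences. This is a generic stand-in for the idea behind App2Vec, not the authors' model, which additionally encodes spatio-temporal context; the traces are fabricated toy data.

# Illustrative sketch of learning app embeddings from usage traces with
# a skip-gram model. A generic stand-in for the idea behind App2Vec,
# NOT the authors' actual model; the usage traces are toy data.
from gensim.models import Word2Vec

# Each "sentence" is one user's chronological sequence of app launches.
usage_traces = [
    ["maps", "taxi", "payments", "messaging"],
    ["messaging", "social", "camera", "social"],
    ["maps", "taxi", "payments", "food_delivery"],
]

model = Word2Vec(sentences=usage_traces, vector_size=32, window=2,
                 min_count=1, sg=1, epochs=50, seed=5)

# Apps used in similar contexts end up close in the embedding space.
print(model.wv.most_similar("taxi", topn=2))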


2021 ◽  
Vol 7 (2) ◽  
pp. 21
Author(s):  
Roland Perko ◽  
Manfred Klopschitz ◽  
Alexander Almer ◽  
Peter M. Roth

Many scientific studies deal with person counting and density estimation from single images. Recently, convolutional neural networks (CNNs) have been applied to these tasks. Even though better results are often reported, it is frequently unclear where the improvements come from and whether the proposed approaches would generalize. Thus, the main goal of this paper was to identify the critical aspects of these tasks and to show how these limit state-of-the-art approaches. Based on these findings, we show how to mitigate these limitations. To this end, we implemented a CNN-based baseline approach, which we extended to deal with the identified problems. These include the discovery of bias in the reference data sets, ambiguity in ground truth generation, and mismatch between the evaluation metrics and the training loss function. The experimental results show that our modifications allow for significantly outperforming the baseline in terms of the accuracy of person counts and density estimation. In this way, we get a deeper understanding of CNN-based person density estimation beyond the network architecture. Furthermore, our insights can help advance the field of person density estimation in general by highlighting current limitations in its evaluation protocols.
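One ground-truth generation step in CNN-based person counting, converting point (head) annotations into a density map whose integral equals the person count, is commonly done by Gaussian smoothing; the following is a sketch under that standard assumption.

# Sketch of the standard ground-truth generation step in CNN-based
# person counting: head annotations (points) are converted into a
# density map by Gaussian smoothing, so the map integrates to the count.
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(points, shape, sigma=4.0):
    """points: (N, 2) array of (row, col) head positions."""
    dm = np.zeros(shape, dtype=np.float64)
    for r, c in points:
        dm[int(r), int(c)] += 1.0
    return gaussian_filter(dm, sigma=sigma)

pts = np.array([[30, 40], [32, 44], [100, 200]])
dm = density_map(pts, (240, 320))
print(f"sum of density map = {dm.sum():.2f} (person count = {len(pts)})")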

