Low Computational-cost Cell Detection Method for Calcium Imaging Data

2018 ◽  
Author(s):  
Tsubasa Ito ◽  
Keisuke Ota ◽  
Kanako Ueno ◽  
Yasuhiro Oisi ◽  
Chie Matsubara ◽  
...  

Abstract
The rapid progress of calcium imaging has reached a point where the activity of tens of thousands of cells can be recorded simultaneously. However, the huge amount of data in such recordings makes manual cell detection impractical. Because cell detection is the first step of multicellular data analysis, there is a pressing need for automatic cell detection methods for large-scale image data. Automatic cell detection algorithms have been pioneered by a handful of research groups. Such algorithms, however, assume a conventional field of view (FOV) (i.e., 512 × 512 pixels) and need significantly more computational power to process a wider FOV within a practical period of time. To overcome this issue, we propose a method called low computational-cost cell detection (LCCD), which can complete its processing even on the latest ultra-large FOV data within a practical period of time. We compared it with two previously proposed methods, constrained non-negative matrix factorization (CNMF) and Suite2P, and found that LCCD detects cells from a huge amount of high-density imaging data in a shorter time, with an accuracy comparable to or better than those of CNMF and Suite2P.
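To make the kind of processing concrete, here is a minimal cell-detection baseline for a calcium-imaging movie: project over time, threshold, and label connected bright regions. This is only an illustrative sketch of the general pipeline shape; it is not the LCCD algorithm, whose internals the abstract does not describe, and all thresholds are assumptions.

```python
import numpy as np
from scipy import ndimage

def detect_cells_baseline(movie, threshold_sd=3.0, min_pixels=20):
    """Naive cell detection on a calcium-imaging movie of shape (T, H, W).

    A hedged sketch: compute a temporal-max projection, threshold it
    relative to the background, and label connected components as
    candidate cells. Real methods (LCCD, CNMF, Suite2P) are far more
    sophisticated; this only illustrates the basic pipeline shape.
    """
    projection = movie.max(axis=0)                 # collapse time
    background = np.median(projection)
    noise = projection.std()
    mask = projection > background + threshold_sd * noise
    labels, n_found = ndimage.label(mask)          # connected components
    # Discard tiny components that are unlikely to be cell bodies.
    sizes = ndimage.sum(mask, labels, range(1, n_found + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
    return np.where(np.isin(labels, keep), labels, 0)
```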


2019 ◽  
Vol 20 (1) ◽  
Author(s):  
Fuyong Xing ◽  
Yuanpu Xie ◽  
Xiaoshuang Shi ◽  
Pingjun Chen ◽  
Zizhao Zhang ◽  
...  

Abstract
Background: Nucleus or cell detection is a fundamental task in microscopy image analysis and supports many other quantitative studies such as object counting, segmentation, and tracking. Deep neural networks are emerging as a powerful tool for biomedical image computing; in particular, convolutional neural networks have been widely applied to nucleus/cell detection in microscopy images. However, almost all models are tailored to specific datasets, and their applicability to other microscopy image data remains unknown. Some existing studies casually learn and evaluate deep neural networks on multiple microscopy datasets, but several critical, open questions remain to be addressed.
Results: We analyze the applicability of deep models specifically for nucleus detection across a wide variety of microscopy image data. More specifically, we present a fully convolutional network-based regression model and extensively evaluate it on large-scale digital pathology and microscopy image datasets, which cover 23 organs (or cancer diseases) and come from multiple institutions. We demonstrate that for a specific target dataset, training with images from the same types of organs is usually necessary for nucleus detection. Although images can be visually similar due to the same staining technique and imaging protocol, deep models learned from images of different organs might not deliver desirable results and would require fine-tuning to be on a par with models trained on target data. We also observe that training with a mixture of target and non-target data does not always yield higher nucleus-detection accuracy; proper data manipulation during model training may be required to achieve good performance.
Conclusions: We conduct a systematic case study on deep models for nucleus detection in a wide variety of microscopy images, aiming to address several important but previously understudied questions. We present and extensively evaluate an end-to-end, pixel-to-pixel fully convolutional regression network and report several significant findings, some of which might not have been reported in previous studies. The performance analysis and observations should be helpful for nucleus detection in microscopy images.
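As a rough illustration of what "pixel-to-pixel fully convolutional regression" means here, the sketch below builds a small network that maps an image to a per-pixel proximity map whose local maxima mark nuclei. This is a generic example of the technique, not the authors' architecture; layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class FCNRegressor(nn.Module):
    """Minimal fully convolutional regression network.

    A sketch of the general idea (pixel-to-pixel regression of a
    nucleus-proximity map), not the paper's exact model. Input:
    (N, 3, H, W) image; output: (N, 1, H, W) map whose local maxima
    are taken as detected nuclei.
    """
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),  # 1x1 conv produces the regression map
        )

    def forward(self, x):
        return self.body(x)

# Training would minimize a pixel-wise loss against a synthetic
# proximity map centered on annotated nucleus locations, e.g.:
# loss = nn.functional.mse_loss(model(images), proximity_targets)
```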



2020 ◽  
pp. 1027-1038
Author(s):  
Jonas Scherer ◽  
Marco Nolden ◽  
Jens Kleesiek ◽  
Jasmin Metzger ◽  
Klaus Kades ◽  
...  

PURPOSE
Image analysis is one of the most promising applications of artificial intelligence (AI) in health care, potentially improving prediction, diagnosis, and treatment of diseases. Although scientific advances in this area critically depend on the accessibility of large-volume and high-quality data, sharing data between institutions faces various ethical and legal constraints as well as organizational and technical obstacles.
METHODS
The Joint Imaging Platform (JIP) of the German Cancer Consortium (DKTK) addresses these issues by providing federated data analysis technology in a secure and compliant way. Using the JIP, medical image data remain in the originating institutions, but analysis and AI algorithms are shared and jointly used. Common standards and interfaces to local systems ensure permanent data sovereignty of participating institutions.
RESULTS
The JIP is established in the radiology and nuclear medicine departments of 10 university hospitals in Germany (DKTK partner sites). In multiple complementary use cases, we show that the platform fulfills all relevant requirements to serve as a foundation for multicenter medical imaging trials and research on large cohorts, including the harmonization and integration of data, interactive analysis, automatic analysis, federated machine learning, and the extensibility and maintenance processes that are elementary for the sustainability of such a platform.
CONCLUSION
The results demonstrate the feasibility of using the JIP as a federated data analytics platform in heterogeneous clinical information technology and software landscapes, solving an important bottleneck for the application of AI to large-scale clinical imaging data.
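The federated machine learning the abstract mentions typically means that only model parameters, never raw images, leave each site. A minimal FedAvg-style aggregation sketch is shown below; this is a generic illustration of federated averaging, not the JIP's actual implementation, and all names are hypothetical.

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """FedAvg-style aggregation: average model parameters from several
    sites, weighted by local dataset size, so raw images never leave
    the institutions. A generic sketch, not the JIP implementation.

    site_weights: one list of numpy parameter arrays per site.
    site_sizes:   number of local training samples per site.
    """
    total = float(sum(site_sizes))
    fractions = [n / total for n in site_sizes]
    return [
        sum(frac * layer for frac, layer in zip(fractions, layers))
        for layers in zip(*site_weights)  # group matching layers across sites
    ]
```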



2014 ◽  
pp. 48-52
Author(s):  
Tarek K. Alameldin ◽  
Norman Badler ◽  
Tarek Sobh ◽  
Raul Mihali

We present an efficient computation of 3D workspaces for redundant manipulators based on a "hybrid" algorithm that combines direct kinematics and screw theory. Direct kinematics enjoys a low computational cost but requires edge detection algorithms when workspace boundaries are needed. Screw theory has an exponential computational cost per workspace point but does not need edge detection; it also allows computing workspace points in prespecified directions, which direct kinematics does not. Applications of the algorithm are discussed.
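To illustrate why direct kinematics is cheap per point while leaving the boundary implicit, the sketch below samples the reachable workspace of a redundant planar arm by evaluating forward kinematics on random joint configurations. It is a generic illustration under assumed link lengths, not the paper's hybrid algorithm.

```python
import numpy as np

def sample_workspace(link_lengths, n_samples=100_000, rng=None):
    """Approximate the reachable workspace of a planar redundant arm
    by direct kinematics on random joint configurations.

    Each point costs only a few trigonometric operations per joint,
    but the workspace boundary remains implicit in the point cloud,
    which is why boundary extraction needs a separate step (edge
    detection, or screw theory as in the paper).
    """
    rng = rng or np.random.default_rng(0)
    lengths = np.asarray(link_lengths)
    # One row of random joint angles per sampled configuration.
    angles = rng.uniform(-np.pi, np.pi, size=(n_samples, len(lengths)))
    cumulative = np.cumsum(angles, axis=1)       # absolute link angles
    x = (lengths * np.cos(cumulative)).sum(axis=1)
    y = (lengths * np.sin(cumulative)).sum(axis=1)
    return np.column_stack([x, y])               # end-effector positions

points = sample_workspace([1.0, 0.8, 0.5])       # 3-link redundant arm
```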



2014 ◽  
Vol 631-632 ◽  
pp. 631-635
Author(s):  
Yi Ting Wang ◽  
Shi Qi Huang ◽  
Hong Xia Wang ◽  
Dai Zhi Liu

Hyperspectral remote sensing technology can be used to make a correct spectral diagnosis of substances, so it is widely used in the field of target detection and recognition. However, it is very difficult to gather accurate prior information for target detection, since the spectral uncertainty of objects is pervasive. An anomaly detector enables one to detect targets whose signatures are spectrally distinct from their surroundings with no prior knowledge, and has therefore become a focus of target detection research. We accordingly study four anomaly detection algorithms and conclude with empirical results on hyperspectral imaging data that illustrate the operation and performance of the various detectors.
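The classic prior-free detector of this kind is the Reed-Xiaoli (RX) detector, sketched below: it scores each pixel by the Mahalanobis distance of its spectrum from the global background statistics. The abstract does not name the four algorithms it studies, so this is an illustrative example rather than necessarily one of them.

```python
import numpy as np

def rx_anomaly_scores(cube):
    """Global Reed-Xiaoli (RX) anomaly detector for a hyperspectral
    cube of shape (H, W, bands).

    Needs no prior target signatures: a pixel is anomalous if its
    spectrum is far, in Mahalanobis distance, from the scene-wide
    background mean and covariance.
    """
    h, w, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    cov_inv = np.linalg.pinv(cov)          # pinv tolerates singular cov
    centered = pixels - mean
    # Per-pixel Mahalanobis distance squared.
    scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    return scores.reshape(h, w)
```

Thresholding the resulting score map (e.g., at a high percentile) yields detections without any prior knowledge of the targets.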



2021 ◽  
Vol 15 ◽  
Author(s):  
Yifan Dai ◽  
Hideaki Yamamoto ◽  
Masao Sakuraba ◽  
Shigeo Sato

The liquid state machine (LSM) is a type of recurrent spiking network with a strong relationship to neurophysiology that has achieved great success in time-series processing. However, the computational cost of simulation and the complex time-dependent dynamics limit the size and functionality of LSMs. This paper presents a large-scale bioinspired LSM with modular topology. We integrate findings on the visual cortex showing that specifically designed input synapses can fit the activation of the real cortex and perform the Hough transform, a feature extraction algorithm used in digital image processing, without additional cost. We experimentally verify that this combination can significantly improve network functionality. Network performance is evaluated on the MNIST dataset, where the image data are encoded into spike series by Poisson coding. We show that the proposed structure not only significantly reduces computational complexity but also achieves higher performance than previously reported networks of a similar size. We also show that the proposed structure is more robust against system damage than small-world and random structures. We believe that the proposed computationally efficient method can contribute greatly to future applications of reservoir computing.
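The Poisson coding mentioned above converts each pixel's intensity into the firing rate of an input neuron. A minimal sketch of this encoding is shown below; the duration and rate parameters are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def poisson_encode(image, duration_ms=100, max_rate_hz=200, rng=None):
    """Encode a grayscale image (values in [0, 1]) into spike trains
    by Poisson coding: each pixel's intensity sets the firing rate of
    one input neuron.

    Returns a (n_pixels, duration_ms) boolean array of 1 ms bins,
    True where that neuron fires in that bin.
    """
    rng = rng or np.random.default_rng(0)
    rates = image.ravel() * max_rate_hz       # firing rate in Hz per pixel
    p_spike = rates * 1e-3                    # spike probability per 1 ms bin
    return rng.random((rates.size, duration_ms)) < p_spike[:, None]

# Example: encode one 28x28 MNIST digit into 784 spike trains.
spikes = poisson_encode(np.random.rand(28, 28))
```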



2020 ◽  
Author(s):  
Darian Hadjiabadi ◽  
Matthew Lovett-Barron ◽  
Ivan Raikov ◽  
Fraser Sparks ◽  
Zhenrui Liao ◽  
...  

Abstract
Neurological and psychiatric disorders are associated with pathological neural dynamics. The fundamental connectivity patterns of cell-cell communication networks that enable pathological dynamics to emerge remain unknown. We studied epileptic circuits using a newly developed integrated computational pipeline applied to cellular-resolution functional imaging data. Control and preseizure neural dynamics in larval zebrafish and in chronically epileptic mice were captured using large-scale cellular-resolution calcium imaging. Biologically constrained effective connectivity modeling extracted the underlying cell-cell communication network. Novel analysis of the higher-order network structure revealed the existence of ‘superhub’ cells that are unusually richly connected to the rest of the network through feedforward motifs. Instability in epileptic networks was causally linked to superhubs whose involvement in feedforward motifs critically enhanced downstream excitation. Disconnecting individual superhubs was significantly more effective in stabilizing epileptic networks than disconnecting hub cells defined traditionally by connection count. Collectively, these results predict a new, maximally selective and minimally invasive cellular target for seizure control.
Highlights
- Higher-order connectivity patterns of large-scale neuronal communication networks were studied in zebrafish and mice.
- Control and epileptic networks were modeled from in vivo cellular-resolution calcium imaging data.
- Rare ‘superhub’ cells, unusually richly connected to the rest of the network through higher-order feedforward motifs, were identified.
- Disconnecting single superhub neurons stabilized epileptic networks more effectively than targeting conventional hub cells defined by high connection count.
- These data predict a maximally selective novel single-cell target for minimally invasive seizure control.
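A simplified way to see how feedforward-motif participation differs from plain connection count is to enumerate, for each node of a directed network, the feedforward motifs (a → b, a → c, b → c) in which it is the source. The sketch below is a generic stand-in for that analysis; the paper's actual pipeline (effective connectivity modeling plus higher-order analysis) is considerably more involved.

```python
import networkx as nx
from itertools import combinations

def feedforward_motif_counts(graph):
    """Count, for each node of a directed graph, the feedforward
    motifs (a -> b, a -> c, b -> c) in which it is the source.

    Nodes with unusually high counts relative to their plain
    out-degree would be 'superhub' candidates in the paper's sense.
    """
    counts = {node: 0 for node in graph}
    for a in graph:
        for b, c in combinations(list(graph.successors(a)), 2):
            # Each directed edge between the two targets closes one motif.
            if graph.has_edge(b, c):
                counts[a] += 1
            if graph.has_edge(c, b):
                counts[a] += 1
    return counts

# Example: rank candidate superhubs in a random directed network.
g = nx.gnp_random_graph(50, 0.1, directed=True, seed=1)
top = sorted(feedforward_motif_counts(g).items(), key=lambda kv: -kv[1])[:5]
```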



2021 ◽  
Vol 14 (13) ◽  
pp. 3420-3420
Author(s):  
Matei Zaharia

Building production ML applications is difficult because of their resource cost and complex failure modes. I will discuss these challenges from two perspectives: the Stanford DAWN Lab and experience with large-scale commercial ML users at Databricks. I will then present two emerging ideas to help address these challenges. The first is "ML platforms", an emerging class of software systems that standardize the interfaces used in ML applications to make them easier to build and maintain. I will give a few examples, including the open-source MLflow system from Databricks [3]. The second idea is models that are more "production-friendly" by design. As a concrete example, I will discuss retrieval-based NLP models such as Stanford's ColBERT [1, 2] that query documents from an updateable corpus to perform tasks such as question-answering, which gives multiple practical advantages, including low computational cost, high interpretability, and very fast updates to the model's "knowledge". These models are an exciting alternative to large language models such as GPT-3.
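The MLflow system mentioned above standardizes, among other things, how experiment parameters and metrics are recorded. A minimal example of its open-source tracking API is shown below; the run name, parameters, and metric are illustrative placeholders, not from the talk.

```python
import mlflow

# Minimal MLflow tracking usage: record the parameters and metrics of
# one training run so it can be compared and reproduced later.
with mlflow.start_run(run_name="demo"):
    mlflow.log_param("model_type", "retrieval_qa")
    mlflow.log_param("learning_rate", 1e-4)
    # ... train the model here ...
    mlflow.log_metric("dev_accuracy", 0.87)
```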



HFSP Journal ◽  
2010 ◽  
Vol 4 (1) ◽  
pp. 1-5 ◽  
Author(s):  
Alex C. Kwan

