Recut: a Concurrent Framework for Sparse Reconstruction of Neuronal Morphology

2021 ◽  
Author(s):  
Karl Marrett ◽  
Muye Zhu ◽  
Yuze Chi ◽  
Zhe Chen ◽  
Chris Choi ◽  
...  

Interpreting the influx of microscopy and neuroimaging data is bottlenecked by neuronal reconstruction's long-standing issues in accuracy, automation, and scalability. Rapidly increasing data size is particularly concerning for modern computing infrastructure due to the memory-bandwidth wall, historically the slowest-advancing aspect of computing technology. Recut is an end-to-end reconstruction pipeline that takes raw large-volume light microscopy images and yields filtered or tuned automated neuronal reconstructions that require minimal proofreading and no other manual intervention. By leveraging adaptive grids and other methods, Recut also has a unified data representation with up to a 509× reduction in memory footprint, resulting in an 89.5× throughput increase and enabling an effective 64× increase in the scale of volumes that can be skeletonized on servers or resource-limited devices. Recut also employs coarse- and fine-grained parallelism to achieve speedup factors beyond CPU core count in sparse settings when compared to the current fastest reconstruction method. By leveraging the sparsity of light microscopy datasets, Recut can allow full brains to be processed in memory, a property which may significantly shift the compute needs of the neuroimaging community. The scale and speed of Recut fundamentally change the reconstruction process, allowing an interactive yet deeply automated workflow.
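The memory savings of a sparse representation can be illustrated with a back-of-the-envelope sketch (not Recut's actual adaptive-grid data structure, and the per-entry sizes here are illustrative assumptions): store only the active voxels of a mostly-empty volume instead of a dense array.

```python
# Toy comparison: dense uint16 volume vs. a sparse map of active voxels.
# Byte counts per voxel/entry are rough assumptions for illustration only.

def dense_bytes(shape, bytes_per_voxel=2):
    """Memory for a dense volume of the given shape (e.g. uint16)."""
    nx, ny, nz = shape
    return nx * ny * nz * bytes_per_voxel

def sparse_bytes(n_active, bytes_per_entry=16):
    """Rough memory for a sparse map of (x, y, z) -> intensity entries."""
    return n_active * bytes_per_entry

# A 1024^3 volume in which only 0.1% of voxels belong to neurites.
shape = (1024, 1024, 1024)
n_active = int(0.001 * shape[0] * shape[1] * shape[2])
ratio = dense_bytes(shape) / sparse_bytes(n_active)
print(f"dense/sparse memory ratio: {ratio:.0f}x")   # 125x at these settings
```

Even with generous per-entry overhead for coordinates, sparsity at microscopy-typical occupancy dominates, which is the intuition behind footprint reductions of the magnitude the paper reports.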

2007 ◽  
Vol 19 (04) ◽  
pp. 239-249
Author(s):  
Wei-Min Jeng ◽  
Yu-Liang Hsu

Most recent medical imaging modalities use noninvasive means to obtain activity information inside human organs, so that doctors may detect the initial symptoms of a disease as early as possible and give appropriate treatment. PET, in both clinical and research use, relies on radioisotopes that emit positrons. A nuclear medicine drug, formed by labeling molecules of deoxidized glucose with a radioelement, is injected into a patient; after the patient's cells absorb it through metabolic functions, detectors record the annihilation coincidence events produced by positrons emitted from the labeled glucose molecules. The most critical module of the modality is therefore the procedure for reconstructing good-quality images from the collected projection information. However, in the reconstruction process, MLEM processes massive data over a considerable number of iterations in order to yield accurate images, and therefore takes a long time to compute. Ordered Subsets Expectation Maximization (OSEM) was proposed to accelerate the reconstruction process by expediting convergence while maintaining the same image quality as MLEM. Since then, the OSEM iterative algorithm has become the de facto reconstruction method adopted by most PET installations. To further improve image quality, both clinical and research data are now acquired in 3D mode on the majority of current systems, and the computational load of iterative reconstruction increases considerably with the 3D OSEM method. Owing to recent technological advancement, many high-performance parallel methods have been proposed to speed up the reconstruction process. These methods generally partition data into several sets before applying any parallel acceleration; they do not exploit the nature of the OSEM method to identify its intrinsic data dependencies.
This project intends to analyze the iterative nature of the 3D OSEM method, particularly the intra- and inter-iteration aspects of the reconstruction method, on the latest shared-memory parallel machine architectures. Experiments will be conducted to demonstrate its superior performance over existing methods.
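The MLEM/OSEM relationship described above can be sketched in a few lines (a toy 1D illustration, not the paper's parallel 3D implementation): OSEM simply applies the MLEM multiplicative update once per ordered subset of the projection data, so each full pass through the data performs several updates instead of one.

```python
import numpy as np

def mlem_update(x, A, y):
    """One MLEM iteration: x <- x * A^T(y / Ax) / A^T 1."""
    ratio = y / np.maximum(A @ x, 1e-12)
    return x * (A.T @ ratio) / np.maximum(A.T @ np.ones_like(y), 1e-12)

def osem_pass(x, A, y, n_subsets):
    """One OSEM pass: the MLEM update applied to each projection subset."""
    idx = np.arange(len(y))
    for s in range(n_subsets):
        rows = idx[s::n_subsets]          # interleaved subset selection
        x = mlem_update(x, A[rows], y[rows])
    return x

rng = np.random.default_rng(0)
A = rng.random((64, 16))                  # toy nonnegative system matrix
x_true = rng.random(16) + 0.5
y = A @ x_true                            # noiseless projections
x = np.ones(16)
for _ in range(50):
    x = osem_pass(x, A, y, n_subsets=4)
print(float(np.linalg.norm(A @ x - y)))   # residual shrinks toward zero
```

Note the data dependency the abstract alludes to: within a pass, each subset update reads the estimate written by the previous subset, which is exactly what naive data partitioning across processors ignores.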


2021 ◽  
pp. 1-13
Author(s):  
Yikai Zhang ◽  
Yong Peng ◽  
Hongyu Bian ◽  
Yuan Ge ◽  
Feiwei Qin ◽  
...  

Concept factorization (CF) is an effective matrix factorization model which has been widely used in many applications. In CF, a linear combination of data points serves as the dictionary, based on which CF can be performed in both the original feature space and the reproducing kernel Hilbert space (RKHS). Conventional CF treats each dimension of the feature vector equally during the data reconstruction process, which contradicts the common observation that different features have different discriminative abilities and therefore contribute differently to pattern recognition. In this paper, we introduce an auto-weighting variable into the conventional CF objective function to adaptively learn the corresponding contributions of different features, and propose a new model termed Auto-Weighted Concept Factorization (AWCF). In AWCF, on one hand, feature importance can be quantitatively measured by the auto-weighting variable, in which features with better discriminative abilities are assigned larger weights; on the other hand, we can obtain a more effective data representation of the underlying semantic information. The detailed optimization procedure for the AWCF objective function is derived, and its complexity and convergence are analyzed. Experiments are conducted on both synthetic and representative benchmark data sets, and the clustering results demonstrate the effectiveness of AWCF in comparison with related models.
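For readers unfamiliar with the base model, here is a minimal sketch of plain CF, X ≈ XWV^T, with the standard multiplicative updates expressed through the Gram matrix K = X^T X (which is what makes CF kernelizable). AWCF's auto-weighting variable is deliberately omitted; this only illustrates the model the paper builds on.

```python
import numpy as np

def cf(X, k, n_iter=200, eps=1e-12, seed=0):
    """Concept factorization X ≈ X W V^T with nonnegative W, V (n x k).

    The dictionary is X @ W, i.e. a linear combination of data points,
    so the updates depend on X only through K = X^T X."""
    n = X.shape[1]
    rng = np.random.default_rng(seed)
    W = rng.random((n, k))
    V = rng.random((n, k))
    K = X.T @ X
    for _ in range(n_iter):
        W *= (K @ V) / np.maximum(K @ W @ (V.T @ V), eps)
        V *= (K @ W) / np.maximum(V @ (W.T @ K @ W), eps)
    return W, V

rng = np.random.default_rng(1)
X = rng.random((20, 30))                 # features x samples, nonnegative
W, V = cf(X, k=5)
err = np.linalg.norm(X - X @ W @ V.T) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.3f}")
```

AWCF, as described in the abstract, would additionally scale each feature dimension by a learned weight inside this objective; the multiplicative-update skeleton stays the same.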


PLoS ONE ◽  
2013 ◽  
Vol 8 (12) ◽  
pp. e84557 ◽  
Author(s):  
Xing Ming ◽  
Anan Li ◽  
Jingpeng Wu ◽  
Cheng Yan ◽  
Wenxiang Ding ◽  
...  

2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Y. Zhang ◽  
B. P. Wang ◽  
Y. Fang ◽  
Z. X. Song

Existing sparse imaging observation error estimation methods usually estimate the error of each observation position by substituting the error parameters into the iterative reconstruction process, which incurs a huge computational cost. In this paper, by analysing the relationship between the imaging results of single-observation sampling data and the error parameters, a SAR observation error estimation method based on maximum relative projection matching is proposed. First, the method estimates the precise position parameters of the reference position by sparse reconstruction with joint error parameters. Second, a relative error estimation model is constructed based on the maximum correlation of the base-space projection. Finally, the accurate error parameters are estimated by the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method. Simulations and measured data from microwave anechoic chambers show that, compared to existing methods, the proposed method has higher estimation accuracy, lower noise sensitivity, and higher computational efficiency.
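The final BFGS refinement step can be sketched on a deliberately simplified stand-in problem. The objective below (negative normalized correlation between a reference signal and an error-shifted model) is a hypothetical toy, not the paper's actual relative-projection-matching cost; it only shows the pattern of estimating an error parameter by correlation maximization with BFGS.

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 1, 256)
true_err = 0.3                                    # unknown phase error to recover
observed = np.sin(2 * np.pi * 8 * t + true_err)   # "projection" with error

def cost(params):
    """Negative normalized correlation; minimal when the model matches."""
    model = np.sin(2 * np.pi * 8 * t + params[0])
    return -np.dot(model, observed) / (
        np.linalg.norm(model) * np.linalg.norm(observed))

res = minimize(cost, x0=np.array([0.0]), method="BFGS")
print(float(res.x[0]))                            # close to true_err
```

Because the cost is smooth in the error parameter, quasi-Newton methods like BFGS converge in a handful of iterations, which is the source of the computational-efficiency advantage over re-running a full iterative reconstruction per candidate error.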


2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Mingsheng Cao ◽  
Luhan Wang ◽  
Zhiguang Qin ◽  
Chunwei Lou

Wireless body area networks (WBANs) have emerged as a highly promising technology that allows patients' data to be collected by tiny wearable and implantable sensors. These data can be analyzed for diagnosis to improve patients' quality of healthcare. However, security and privacy preservation of the collected data remain major challenges on resource-limited WBAN devices, as does the urgent need for fine-grained search and lightweight access. To resolve these issues, in this paper we propose a lightweight fine-grained search over encrypted data in WBANs by employing ciphertext-policy attribute-based encryption and searchable encryption technologies; the proposed scheme provides resource-constrained end users with fine-grained keyword search and lightweight access simultaneously. We also formally define its security and prove that it is secure against both chosen-plaintext attack and chosen-keyword attack. Finally, a performance evaluation demonstrates that our scheme is much more efficient and practical than related schemes, making it more suitable for real-world applications.
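The searchable-encryption half of such a scheme can be illustrated with a deliberately simplified sketch (this is plain symmetric searchable encryption with HMAC trapdoors, not the paper's CP-ABE construction, and all names here are illustrative): the server matches opaque keyword tokens without ever learning the keywords.

```python
import hashlib
import hmac
import os

def trapdoor(key: bytes, keyword: str) -> bytes:
    """Deterministic search token; the server cannot invert it to the keyword."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

def build_index(key: bytes, doc_id: str, keywords):
    """Client-side index entry: doc id plus the token set for its keywords."""
    return (doc_id, {trapdoor(key, w) for w in keywords})

def search(index, token: bytes):
    """Server-side match: compare tokens only, never plaintext keywords."""
    return [doc_id for doc_id, tokens in index if token in tokens]

key = os.urandom(32)                       # user's secret key, never sent
index = [build_index(key, "record-1", ["glucose", "ecg"]),
         build_index(key, "record-2", ["ecg", "spo2"])]
print(search(index, trapdoor(key, "ecg")))     # matches both records
```

A CP-ABE layer, as in the paper, would additionally bind decryption of the matched records to an attribute policy (e.g. "cardiologist AND hospital-A"), so that fine-grained access control and keyword search compose.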


2010 ◽  
Vol 143-144 ◽  
pp. 768-772
Author(s):  
Shao Yan Gai ◽  
Fei Peng Da

A surface reconstruction method for material shape analysis is presented. The three-dimensional shape reconstruction system detects the object surface based on optical principles. A series of gratings is projected onto the object, and the projected gratings are deformed by the object surface. From images of the deformed gratings, the three-dimensional profile of the material surface can be obtained. The basic aspects of the method are discussed, including the vision geometry, the light projection, and the coding principle. The proposed method can handle objects with various discontinuities on the material surface, increasing the flexibility and robustness of the shape reconstruction process. The experimental results show the efficiency of the method: the material surface can be reconstructed with high precision in various applications.
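One common decoding step in grating-projection systems is N-step phase shifting, sketched below on synthetic data (a generic textbook formulation, not necessarily the exact coding principle this paper uses): the phase of the projected sinusoid, deformed by the surface, is recovered from N shifted images.

```python
import numpy as np

def recover_phase(images):
    """Recover wrapped phase phi from images I_n = A + B*cos(phi + 2*pi*n/N).

    Sums against sin/cos of the known shifts isolate sin(phi) and cos(phi)."""
    N = len(images)
    deltas = 2 * np.pi * np.arange(N) / N
    num = sum(I * np.sin(d) for I, d in zip(images, deltas))
    den = sum(I * np.cos(d) for I, d in zip(images, deltas))
    return np.arctan2(-num, den)           # wrapped to (-pi, pi]

# Synthetic check: a known phase profile standing in for surface deformation
x = np.linspace(0, 1, 100)
phi_true = 2.0 * x - 1.0                   # stays inside (-pi, pi], no unwrapping
images = [1.0 + 0.5 * np.cos(phi_true + 2 * np.pi * n / 4) for n in range(4)]
phi = recover_phase(images)
print(float(np.max(np.abs(phi - phi_true))))   # numerically ~0
```

Converting the recovered phase to height then uses the system's vision geometry (triangulation between projector and camera); handling discontinuities, as the abstract notes, is where the choice of code matters, since wrapped phase alone is ambiguous across jumps.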


Author(s):  
Kristin M. Torre ◽  
Michael J. Murphy ◽  
Jane M. Grant-Kels

Technological advancement is steadily reshaping the field of medical education. In histopathology and especially dermatopathology training, the transition from glass slide microscopy (GSM) to virtual microscopy (VM) is serving as an instructional tool for medical students, residents, fellows, and experienced physicians. Online slide atlases and digitized content are being utilized by educators and trainees to enhance and assess both individual and collaborative learning. With the expansion of mobile technology, new avenues are emerging for image acquisition, in addition to remote instruction and consultation in resource-limited areas. Various computer-based applications (“apps”) and social media sites also serve as digital assets in education and training and allow for rapid dissemination and sharing of information around the world.


2014 ◽  
Vol 22 (04) ◽  
pp. 1450011 ◽  
Author(s):  
Gang Ye ◽  
Chunhua Deng ◽  
Qing Huo Liu

Thermoacoustic tomography (TAT) is a novel noninvasive and nonionizing medical imaging modality for breast cancer detection. In TAT, a short pulse of microwave radiation irradiates the breast tissue. The tissue absorbs the microwave energy and is heated momentarily, generating acoustic waves through thermoelastic expansion. If the pulse width of the microwave radiation is around one microsecond, the generated acoustic waves are ultrasonic, in the MHz range. Wide-band ultrasonic transducers are employed to acquire the time-resolved ultrasound signals, which carry information about the microwave absorption properties (mainly related to conductivities) of different tissues. An image showing these properties can then be reconstructed from the time-resolved ultrasound signals. Most existing TAT reconstruction methods assume that the tissue under study is acoustically homogeneous. In practice, however, most biological tissues are inhomogeneous; for example, the speed of sound varies by about 10% in breast tissue. This acoustic heterogeneity causes phase distortion of the pressure field, which in turn blurs the reconstructed image, limiting the ability to resolve small objects. In this work, a 3D inhomogeneous reconstruction method based on the pseudo-spectral time-domain (PSTD) method is presented to overcome this problem. The method comprises two steps. The first step is a homogeneous reconstruction, from which an initial image is obtained. Since the inhomogeneity itself is usually an acoustic source, the shape and location of the inhomogeneity can be estimated from this image. The acoustic properties of the inhomogeneities (available from the literature for known tissue types) are then assigned to the classified regions, and a second reconstruction based on the updated acoustic property map is conducted. This process effectively corrects the phase distortion and improves the ability to image small objects. A 3D breast phantom is used to study the proposed method. The phantom was generated from the data set of the Visible Human Project; regions of different tissue types were classified, and acoustic and electric properties were assigned to them. Small phantom tumors placed in the breast phantom were reconstructed successfully with the inhomogeneous reconstruction method, achieving improved resolution compared to the homogeneous method.
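The homogeneous first step of such pipelines is often a delay-and-sum backprojection, sketched below in 2D with idealized delta signals (a generic illustration with a constant speed of sound; the paper's PSTD-based inhomogeneous second pass is far beyond a few lines).

```python
import numpy as np

def delay_and_sum(signals, sensors, grid, c, dt):
    """Backproject time-resolved sensor signals onto image grid points.

    signals: (n_sensors, n_t) array; sensors, grid: (n, 2) positions in metres.
    Each grid point sums every sensor's signal sampled at its acoustic delay."""
    image = np.zeros(len(grid))
    n_t = signals.shape[1]
    for s_pos, sig in zip(sensors, signals):
        dist = np.linalg.norm(grid - s_pos, axis=1)
        idx = np.clip((dist / (c * dt)).astype(int), 0, n_t - 1)
        image += sig[idx]
    return image

# One point source at the origin, ring of 32 sensors, ideal delta signals
c, dt, n_t = 1500.0, 1e-7, 400            # speed of sound (m/s), sample period
angles = np.linspace(0, 2 * np.pi, 32, endpoint=False)
sensors = 0.02 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
signals = np.zeros((32, n_t))
signals[:, int(0.02 / (c * dt))] = 1.0    # arrival time = 0.02 m / c
grid = np.array([[0.0, 0.0], [0.01, 0.0]])
img = delay_and_sum(signals, sensors, grid, c, dt)
print(img)   # the source point accumulates far more energy than the other
```

The paper's key point maps directly onto the `dist / c` term: with a spatially varying speed of sound, that constant-`c` delay is wrong, delays decohere, and the focus blurs, which is what the updated acoustic property map and PSTD forward model correct.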


Author(s):  
Ji Ma ◽  
Hsi-Yung Feng ◽  
Lihui Wang

Automatic and reliable reconstruction of sharp features remains an open research issue in triangle mesh surface reconstruction. This paper presents a new feature sensitive mesh reconstruction method based on dependable neighborhood geometric information per input point. Such information is derived from the matching result of the local umbrella mesh constructed at each point. The proposed algorithm is different from the existing post-processing algorithms. The proposed algorithm reconstructs the triangle mesh via an integrated and progressive reconstruction process and features a unified multi-level inheritance priority queuing mechanism to prioritize the inclusion of each candidate triangle. A novel flatness sensitive filter, referred to as the normal vector cone filter, is introduced in this work and used to reliably reconstruct sharp features. In addition, the proposed algorithm aims to reconstruct a watertight manifold triangle mesh that passes through the complete original point set without point addition and removal. The algorithm has been implemented and validated using publicly available point cloud data sets. Compared to the original object geometry, it is seen that the reconstructed triangle meshes preserve the sharp features well and only contain minor shape deviations.
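The flatness test at the heart of a normal-cone style filter can be sketched as follows. This is a hedged toy in the spirit of the paper's normal vector cone filter, not its actual algorithm: the cone half-angle, function names, and the use of a single reference normal are all illustrative assumptions.

```python
import numpy as np

def triangle_normal(a, b, c):
    """Unit normal of triangle (a, b, c) via the cross product."""
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

def within_cone(normal, reference, half_angle_deg=30.0):
    """True if `normal` deviates from `reference` by less than the cone angle."""
    cos_angle = float(np.clip(np.dot(normal, reference), -1.0, 1.0))
    return np.degrees(np.arccos(cos_angle)) < half_angle_deg

ref = np.array([0.0, 0.0, 1.0])           # local reference normal
flat = triangle_normal(np.array([0.0, 0.0, 0.0]),
                       np.array([1.0, 0.0, 0.0]),
                       np.array([0.0, 1.0, 0.0]))
steep = triangle_normal(np.array([0.0, 0.0, 0.0]),
                        np.array([1.0, 0.0, 0.0]),
                        np.array([0.0, 0.0, 1.0]))
print(within_cone(flat, ref), within_cone(steep, ref))
```

In a full reconstruction, candidate triangles passing such a test in locally flat regions would be prioritized, while near sharp features the cone criterion prevents smoothing across the crease.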


Publications ◽  
2019 ◽  
Vol 7 (1) ◽  
pp. 13 ◽  
Author(s):  
Afshin Sadeghi ◽  
Sarven Capadisli ◽  
Johannes Wilm ◽  
Christoph Lange ◽  
Philipp Mayr

An increasing number of scientific publications are created in open and transparent peer review models: a submission is published first, and then reviewers are invited, or a submission is reviewed in a closed environment but then these reviews are published with the final article, or combinations of these. Reasons for open peer review include giving better credit to reviewers, and enabling readers to better appraise the quality of a publication. In most cases, the full, unstructured text of an open review is published next to the full, unstructured text of the article reviewed. This approach prevents human readers from getting a quick impression of the quality of parts of an article, and it does not easily support secondary exploitation, e.g., for scientometrics on reviews. While document formats have been proposed for publishing structured articles including reviews, integrated tool support for entire open peer review workflows resulting in such documents is still scarce. We present AR-Annotator, the Automatic Article and Review Annotator which employs a semantic information model of an article and its reviews, using semantic markup and unique identifiers for all entities of interest. The fine-grained article structure is not only exposed to authors and reviewers but also preserved in the published version. We publish articles and their reviews in a Linked Data representation and thus maximise their reusability by third party applications. We demonstrate this reusability by running quality-related queries against the structured representation of articles and their reviews.

