Reconstruction for Time-Domain In Vivo EPR 3D Multigradient Oximetric Imaging—A Parallel Processing Perspective

2009 ◽  
Vol 2009 ◽  
pp. 1-12 ◽  
Author(s):  
Christopher D. Dharmaraj ◽  
Kishan Thadikonda ◽  
Anthony R. Fletcher ◽  
Phuc N. Doan ◽  
Nallathamby Devasahayam ◽  
...  

Three-dimensional Oximetric Electron Paramagnetic Resonance Imaging using the Single Point Imaging modality generates unpaired spin density and oxygen images that can readily distinguish between normal and tumor tissues in small animals. With fast imaging, it is also possible to track changes in tissue oxygenation in response to the oxygen content of the breathing air. However, each 3D oximetric imaging experiment involves gigabytes of data that must undergo digital band-pass filtering and background noise subtraction, followed by 3D Fourier reconstruction. This process is rather slow on a conventional uniprocessor system. This paper presents a parallelization framework using OpenMP runtime support and parallel MATLAB to execute such computationally intensive programs. The Intel compiler is used to develop a parallel C++ code based on OpenMP. The code is executed on four dual-core AMD Opteron shared-memory processors, reducing the computational burden of the filtering task significantly. The results show that the parallel filtering code achieved a speedup factor of 46.66 over the equivalent serial MATLAB code. In addition, a parallel MATLAB code has been developed to perform the 3D Fourier reconstruction. Speedup factors of 4.57 and 4.25 were achieved for the reconstruction process and the oximetry computation, respectively, for a data set with 23×23×23 gradient steps. Execution times were measured for both the serial and parallel implementations across different data dimensions and are presented for comparison. The reported system has been designed to be easily accessible even from low-cost personal computers through the local intranet (NIHnet). The experimental results demonstrate that parallel computing provides the computational power needed to obtain biophysical parameters from 3D EPR oximetric imaging in near real-time.
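The filtering stage parallelizes naturally because each recorded trace can be processed independently. The authors' implementation is C++/OpenMP; purely as an illustration, here is a minimal Python sketch in which a toy moving-average "filter" and the function names are our own stand-ins, with a thread pool playing the role of the OpenMP worker threads:

```python
from concurrent.futures import ThreadPoolExecutor

def bandpass(fid, w=5):
    """Toy stand-in for the digital band-pass filter: subtract a
    moving-average baseline (background) from one recorded trace."""
    n = len(fid)
    out = []
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        out.append(fid[i] - sum(fid[lo:hi]) / (hi - lo))
    return out

def filter_all(fids, workers=4):
    """The loop over traces is embarrassingly parallel, which is the
    role OpenMP's worksharing plays in the C++ implementation."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(bandpass, fids))

fids = [[1.0] * 64 for _ in range(8)]  # hypothetical constant traces
filtered = filter_all(fids)
```

Note that in CPython the GIL prevents a real speedup for this CPU-bound loop; on shared-memory hardware like the four dual-core Opterons described above, OpenMP splits the same outer loop across cores, and the achievable speedup is bounded by the serial fraction of the pipeline (Amdahl's law).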

2021 ◽  
Author(s):  
Connor James Darling ◽  
Samuel P.X. Davis ◽  
Sunil Kumar ◽  
Paul M.W. French ◽  
James A McGinty

We present a single-shot adaptation of Optical Projection Tomography (OPT) for high-speed volumetric snapshot imaging of dynamic mesoscopic samples. Conventional OPT has been applied to in vivo imaging of animal models such as D. rerio, but the sequential acquisition of projection images required for volumetric reconstruction typically requires samples to be immobilised during the acquisition of an OPT data set. We demonstrate a proof-of-principle system capable of single-shot imaging of a 1 mm diameter volume, achieving camera-limited rates of up to 62.5 volumes/second, which we have applied to 3D imaging of a freely swimming zebrafish embryo. This is achieved by recording 8 projection views simultaneously on 4 low-cost CMOS cameras. With no stage required to rotate the sample, this single-shot OPT system can be implemented with a component cost of under 5,000 GBP. The system design can be adapted to different-sized fields of view and may be applied to a broad range of dynamic samples, including fluid dynamics.


2021 ◽  
Vol 7 (2) ◽  
pp. 247-250
Author(s):  
Amr Abuzer ◽  
Ady Naber ◽  
Simon Hoffmann ◽  
Lucy Kessler ◽  
Ramin Khoramnia ◽  
...  

Abstract Optical Coherence Tomography Angiography (OCTA) is an imaging modality that provides three-dimensional information on the retinal microvasculature and therefore promises early diagnosis and adequate monitoring in ophthalmology. However, there is considerable variability between experts analysing these data. Measures for quantitative assessment of the vasculature, such as fractal dimension, need to be developed and established. Fractal dimension can be used to assess the complexity of vessels and has been shown to be independently associated with neovascularization, a symptom of diseases such as diabetic retinopathy. This investigation assessed the performance of three fractal dimension algorithms: Box Counting Dimension (BCD), Information Dimension (ID), and Differential Box Counting (DBC). Two of these, BCD and ID, rely on prior vessel segmentation; assessing the added value or disturbance contributed by the segmentation step is a second aim of this study. The investigation was performed on a data set of 9 in vivo human eyes. Since no ground truth is available, the methods were tested on their ability to tell the Superficial Vascular Complex (SVC) and Deep Vascular Complex (DVC) layers apart and on the consistency of measurements of the same layer at different time points. The performance measures were the intraclass correlation coefficient (ICC) and the Mann-Whitney U test. All three methods could tell the different layers apart and gave consistent values when applied to the same slab. In the consistency test, the non-segmentation-based method, DBC, was found to be less accurate, expressed in a lower ICC value, than its segmentation-based counterparts. This result is thought to be due to DBC's higher sensitivity compared with the other methods. This higher sensitivity might help detect changes in the microvasculature, such as neovascularization, but is also more prone to noise and artefacts.
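Box Counting Dimension, the first of the three algorithms, is simple to state: cover the segmented vessel pixels with boxes of side s, count the occupied boxes N(s), and take the slope of log N(s) against log(1/s). A minimal, illustrative Python sketch (function names and box sizes are ours, not from the paper):

```python
from math import log

def box_counting_dimension(points, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a set of (x, y) pixel
    coordinates: count occupied boxes at several scales, then fit
    the slope of log N(s) versus log(1/s) by least squares."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(x // s, y // s) for x, y in points}  # occupied boxes
        xs.append(log(1.0 / s))
        ys.append(log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

# sanity checks on shapes of known dimension
square = [(x, y) for x in range(64) for y in range(64)]  # dimension 2
line = [(x, 0) for x in range(64)]                       # dimension 1
```

A filled square recovers dimension 2 and a straight line dimension 1; retinal vasculature typically falls in between, which is what makes the measure a useful complexity index.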


Author(s):  
M. Amooshahi ◽  
A. Samani

Breast elastography has been proposed as a novel imaging modality for breast cancer detection and assessment. As pathologies are known to change tissue stiffness significantly, the idea behind elastography is to use tissue stiffness as the imaging contrast mechanism. Evidence in the literature suggests that various pathological tissues exhibit different mechanical stiffness characteristics; therefore, in addition to detecting the presence of abnormalities, elastography is capable of pathological tissue classification. In this work, we propose a novel nonlinear (hyperelastic) breast elastography system that accounts for the large tissue deformations resulting from mechanical stimulation. To idealize breast tissue, we use the well-known Veronda-Westman model as the forward problem solution in the hyperelastic parameter reconstruction process. This process involves tissue mechanical stimulation and displacement data acquisition, followed by solving an inverse problem to find the hyperelastic parameters iteratively. These parameters are useful for in vivo tumor classification, image-guided surgery, and the development of Virtual Reality systems. Due to the exponential form of the Veronda-Westman function, however, this model cannot be solved using inverse-matrix techniques; we have therefore developed a novel technique to solve the corresponding nonlinear inverse problem. To validate the technique, we used an experimental breast-tissue-mimicking phantom made of PVA-C (polyvinyl alcohol cryogel), which exhibits nonlinear mechanical behavior. Displacement data were acquired using a combination of Time Domain Cross-Correlation Estimation (TDE) and Horn-Schunck optical flow techniques.


Author(s):  
R.J. Mount ◽  
R.V. Harrison

The sensory end organ of the ear, the organ of Corti, rests on a thin basilar membrane which lies between the bone of the central modiolus and the bony wall of the cochlea. In vivo, the organ of Corti is protected by the bony wall which totally surrounds it. In order to examine the sensory epithelium by scanning electron microscopy it is necessary to dissect away the protective bone and expose the region of interest (Fig. 1). This leaves the fragile organ of Corti susceptible to physical damage during subsequent handling. In our laboratory cochlear specimens, after dissection, are routinely prepared by the O-T-O-T-O technique, critical point dried and then lightly sputter coated with gold. This processing involves considerable specimen handling, including several hours on a rotator, during which the organ of Corti is at risk of being physically damaged. The following procedure uses low-cost, readily available materials to hold the specimen during processing, preventing physical damage while allowing an unhindered exchange of fluids.
Following fixation, the cochlea is dehydrated to 70% ethanol, then dissected under ethanol to prevent air drying. The holder is prepared by punching a hole in the flexible snap cap of a Wheaton vial with a paper hole punch. A small amount of two-component epoxy putty is well mixed, then pushed through the hole in the cap. The putty on the inner cap is formed into a "cup" to hold the specimen (Fig. 2); the putty on the outside is smoothed into a "button" to give good attachment even when the cap is flexed during handling (Fig. 3). The cap is submerged in the 70% ethanol, the bone at the base of the cochlea is seated into the cup, and the sides of the cup are squeezed with forceps to grip it (Fig. 4). Several types of epoxy putty have been tried; most are either soluble in ethanol to some degree or do not set in ethanol.
The only putty we find successful is "DURO™ MASTERMEND™ Epoxy Extra Strength Ribbon" (Loctite Corp., Cleveland, Ohio); this is a blue and yellow ribbon that is kneaded to form a green putty, and it is available at many hardware stores.


2018 ◽  
Author(s):  
Peter De Wolf ◽  
Zhuangqun Huang ◽  
Bede Pittenger

Abstract Methods are available to measure conductivity, charge, surface potential, carrier density, piezoelectric, and other electrical properties with nanometer-scale resolution. One of these methods, scanning microwave impedance microscopy (sMIM), has gained interest due to its capability to measure the full impedance (capacitive and resistive parts) with high sensitivity and high spatial resolution. This paper introduces a novel data-cube approach that combines sMIM imaging and sMIM point spectroscopy, producing an integrated and complete 3D data set. This approach replaces the subjective practice of guessing locations of interest (for single-point spectroscopy) with a big-data approach, yielding higher-dimensional data that can be sliced along any axis or plane and is conducive to principal component analysis or other machine learning approaches to data reduction. The data-cube approach is also applicable to other AFM-based electrical characterization modes.
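The data cube pairs two spatial axes with a spectroscopy axis, so any 2D image (fixed spectroscopy step) or point spectrum (fixed pixel) is just a slice. A schematic Python illustration using nested lists; the axis layout and function names are our assumptions for illustration, not part of the sMIM instrument software:

```python
def make_cube(nx, ny, nv, measure):
    """cube[x][y][v]: response at pixel (x, y) and spectroscopy step v."""
    return [[[measure(x, y, v) for v in range(nv)]
             for y in range(ny)] for x in range(nx)]

def image_at(cube, v):
    """Slice along the spectroscopy axis: one 2D image per step v."""
    return [[pixel[v] for pixel in row] for row in cube]

def spectrum_at(cube, x, y):
    """Slice along the spatial axes: full point spectrum at one pixel."""
    return cube[x][y]

# hypothetical synthetic cube: 2×3 pixels, 4 spectroscopy steps
cube = make_cube(2, 3, 4, lambda x, y, v: x + 10 * y + 100 * v)
```

In practice one would hold this as a 3D array and slice with `cube[:, :, v]` or `cube[x, y, :]`; the nested-list version only makes the indexing explicit.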


2021 ◽  
Vol 7 (2) ◽  
pp. 356-362
Author(s):  
Harry Coppock ◽  
Alex Gaskell ◽  
Panagiotis Tzirakis ◽  
Alice Baird ◽  
Lyn Jones ◽  
...  

Background: Since the emergence of COVID-19 in December 2019, multidisciplinary research teams have wrestled with how best to control the pandemic in light of its considerable physical, psychological and economic damage. Mass testing has been advocated as a potential remedy; however, mass testing using physical tests is a costly and hard-to-scale solution. Methods: This study demonstrates the feasibility of an alternative form of COVID-19 detection, harnessing digital technology through the use of audio biomarkers and deep learning. Specifically, we show that a deep neural network based model can be trained to detect symptomatic and asymptomatic COVID-19 cases using breath and cough audio recordings. Results: Our model, a custom convolutional neural network, demonstrates strong empirical performance on a data set consisting of 355 crowdsourced participants, achieving an area under the receiver operating characteristic curve of 0.846 on the task of COVID-19 classification. Conclusion: This study offers a proof of concept for diagnosing COVID-19 using cough and breath audio signals and motivates a comprehensive follow-up research study on a wider data sample, given the evident advantages of a low-cost, highly scalable digital COVID-19 diagnostic tool.
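The reported figure of 0.846 is the area under the ROC curve, which for a binary classifier equals the probability that a randomly drawn positive case scores above a randomly drawn negative one (the normalized Mann-Whitney U statistic). A minimal sketch with hypothetical scores, not data from the study:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC as a rank statistic: the fraction of (positive, negative)
    pairs ranked correctly, counting ties as half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# hypothetical model outputs for positive and negative recordings
auc = roc_auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])  # 8 of 9 pairs correct
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation of the two classes, which is what makes it a threshold-free summary of classifier performance.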


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Woo Seok Kim ◽  
Sungcheol Hong ◽  
Milenka Gamero ◽  
Vivekanand Jeevakumar ◽  
Clay M. Smithhart ◽  
...  

Abstract The vagus nerve supports diverse autonomic functions and behaviors important for health and survival. To understand how specific components of the vagus contribute to behaviors and long-term physiological effects, it is critical to modulate their activity with anatomical specificity in awake, freely behaving conditions using reliable methods. Here, we introduce an organ-specific, scalable, multimodal, wireless optoelectronic device for precise and chronic optogenetic manipulations in vivo. When combined with an advanced coil-antenna system and a multiplexing strategy for powering 8 individual homecages from a single RF transmitter, the proposed wireless telemetry enables low-cost, high-throughput, and precise functional mapping of peripheral neural circuits, including long-term behavioral and physiological measurements. Deployment of these technologies reveals an unexpected role for non-stretch vagal sensory fibers of the stomach in suppressing appetite and demonstrates the durability of the miniature wireless device under harsh gastric conditions.


Neurosurgery ◽  
2012 ◽  
Vol 72 (3) ◽  
pp. 353-366 ◽  
Author(s):  
Francesco Cardinale ◽  
Massimo Cossu ◽  
Laura Castana ◽  
Giuseppe Casaceli ◽  
Marco Paolo Schiariti ◽  
...  

Abstract BACKGROUND: Stereoelectroencephalography (SEEG) methodology, originally developed by Talairach and Bancaud, is progressively gaining popularity for the presurgical invasive evaluation of drug-resistant epilepsies. OBJECTIVE: To describe recent SEEG methodological implementations carried out in our center, to evaluate safety, and to analyze in vivo application accuracy in a consecutive series of 500 procedures with a total of 6496 implanted electrodes. METHODS: Four hundred nineteen procedures were performed with the traditional 2-step surgical workflow, which was modified for the subsequent 81 procedures. The new workflow entailed acquisition of brain 3-dimensional angiography and magnetic resonance imaging in frameless and markerless conditions, advanced multimodal planning, and robot-assisted implantation. Quantitative analysis of in vivo entry point and target point localization error was performed on a subset of 118 procedures (1567 electrodes). RESULTS: The methodology allowed successful implantation in all cases. The major complication rate was 12 of 500 (2.4%), including 1 death due to indirect morbidity. Median entry point localization error was 1.43 mm (interquartile range, 0.91-2.21 mm) with the traditional workflow and 0.78 mm (interquartile range, 0.49-1.08 mm) with the new one (P < 2.2 × 10^−16). Median target point localization errors were 2.69 mm (interquartile range, 1.89-3.67 mm) and 1.77 mm (interquartile range, 1.25-2.51 mm), respectively (P < 2.2 × 10^−16). CONCLUSION: SEEG is a safe and accurate procedure for the invasive assessment of the epileptogenic zone. The traditional Talairach methodology, implemented with multimodal planning and robot-assisted surgery, allows direct electrical recording from superficial and deep-seated brain structures, providing essential information in the most complex cases of drug-resistant epilepsy.


Author(s):  
Laura Wienands ◽  
Franziska Theiß ◽  
James Eills ◽  
Lorenz Rösler ◽  
Stephan Knecht ◽  
...  

Abstract Parahydrogen-induced polarization is a hyperpolarization method for enhancing nuclear magnetic resonance signals via chemical reactions or interactions involving the para spin isomer of hydrogen gas. This method has allowed biomolecules to be hyperpolarized to such a level that they can be used for real-time in vivo metabolic imaging. One particularly promising example is fumarate, which can be rapidly and efficiently hyperpolarized at low cost by hydrogenating an acetylene dicarboxylate precursor molecule using parahydrogen. The reaction is relatively slow compared with the timescale on which the hyperpolarization relaxes back to thermal equilibrium, and an undesirable second hydrogenation step can convert the fumarate into succinate. To date, the hydrogenation chemistry has not been thoroughly investigated, so previous work has been inconsistent in the chosen reaction conditions in the search for ever-higher reaction rates and yields. In this work we investigate the effects of solution preparation protocols and reaction conditions on the rate and yield of fumarate formation. We report conditions that reproducibly yield over 100 mM fumarate on a short timescale, and discuss aspects of the protocol that hinder the formation of fumarate or lead to irreproducible results. We also provide experimental procedures and recommendations for performing reproducible kinetics experiments in which hydrogen gas is repeatedly bubbled into an aqueous solution, overcoming challenges related to the viscosity and surface tension of water.


2020 ◽  
Author(s):  
Joost van Haasteren ◽  
Altar M Munis ◽  
Deborah R Gill ◽  
Stephen C Hyde

Abstract The gene and cell therapy fields are advancing rapidly, with the potential to treat and cure a wide range of diseases, and lentivirus-based gene transfer agents are the vector of choice for many investigators. Early cases of insertional mutagenesis caused by gammaretroviral vectors highlighted that integration site (IS) analysis is a major safety and quality control checkpoint for lentiviral applications. The methods established to detect lentiviral integrations using next-generation sequencing (NGS) are limited by short read length, inadvertent PCR bias, low yield, or lengthy protocols. Here, we describe a new method to sequence IS: Amplification-free Integration Site sequencing (AFIS-Seq). AFIS-Seq is based on amplification-free, Cas9-mediated enrichment of high-molecular-weight chromosomal DNA suitable for long-range Nanopore MinION sequencing. This accessible and low-cost approach generates long reads, enabling IS mapping with high certainty within a single day. We demonstrate proof of concept by mapping the IS of lentiviral vectors in a variety of cell models, reporting up to 1600-fold enrichment of the signal. This method can be further extended to sequencing of Cas9-mediated integration of genes and to in vivo analysis of IS. AFIS-Seq uses long-read sequencing to facilitate safety evaluation of preclinical lentiviral vector gene therapies by providing IS analysis with improved confidence.

