Rapid Parameter Estimation for Selective Inversion Recovery Myelin Imaging using an Open-Source Julia Toolkit

2021
Author(s):
Nicholas J Sisco
Ping Wang
Ashley M Stokes
Richard D Dortch

Background: Magnetic resonance imaging (MRI) is used extensively to quantify myelin content; however, computational bottlenecks remain a challenge for advanced imaging techniques in clinical settings. We present a fast, open-source toolkit for processing quantitative magnetization transfer data derived from selective inversion recovery (SIR) acquisitions that allows parameter map estimation, including the myelin-sensitive macromolecular pool size ratio (PSR). Significant progress has been made in reducing SIR acquisition times to improve clinical feasibility. However, parameter map estimation from the resulting data remains computationally expensive. To overcome this limitation, we developed a computationally efficient, open-source toolkit implemented in the Julia language. Methods: To test the accuracy of this toolkit, we simulated SIR images with varying PSR and spin-lattice relaxation rate of the free water pool (R1f) over physiologically meaningful ranges of 5 to 20% and 0.5 to 1.5 s⁻¹, respectively. Rician noise was then added, and the parameter maps were estimated using our Julia toolkit. Probability density histograms and Lin's concordance correlation coefficients (LCCC) were used to assess the accuracy and precision of the fits against the known simulation values. To further mimic biological tissue, we generated five cross-linked bovine serum albumin (BSA) phantoms with concentrations ranging from 1.25 to 20%. The phantoms were imaged at 3T using SIR, and the data were fit to estimate PSR and R1f. Similarly, a healthy volunteer was imaged at 3T, and SIR parameter maps were estimated to demonstrate the reduced computational time in a real-world clinical example. Results: Estimated SIR parameter maps from our Julia toolkit agreed with the simulated values (LCCC > 0.98). The toolkit was further validated using the BSA phantoms and a whole-brain scan at 3T. In both cases, SIR parameter estimates were consistent with published values obtained using MATLAB. However, compared to earlier work using MATLAB, our Julia toolkit provided an approximately 20-fold reduction in computational time. Conclusions: We developed a fast, open-source toolkit for rapid and accurate SIR MRI parameter estimation using Julia. The reduction in computational cost should make SIR parameter mapping accessible in clinical settings.
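
As a rough illustration of the validation step described above (not the authors' Julia implementation), the following Python sketch shows how Rician noise can be added to a noiseless magnitude signal and how Lin's concordance correlation coefficient can be computed between estimated and true parameter maps; the function names and noise level are illustrative.

```python
import numpy as np

def add_rician_noise(signal, sigma, seed=0):
    """Corrupt a noiseless magnitude signal with Rician noise of standard deviation sigma."""
    rng = np.random.default_rng(seed)
    real = signal + rng.normal(0.0, sigma, signal.shape)
    imag = rng.normal(0.0, sigma, signal.shape)
    return np.sqrt(real**2 + imag**2)

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two parameter maps."""
    x, y = np.asarray(x, float).ravel(), np.asarray(y, float).ravel()
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)
```

An LCCC close to 1, as reported above (> 0.98), indicates that the estimates both correlate with and lie on the identity line against the simulated ground truth.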

Author(s):  
Tu Huynh-Kha
Thuong Le-Tien
Synh Ha
Khoa Huynh-Van

This research develops a new method to detect forgery in images by combining the wavelet transform and modified Zernike moments (MZMs), in which the features are defined from more pixels than in traditional Zernike moments. The test image is first converted to grayscale, and a one-level Discrete Wavelet Transform (DWT) is applied to halve the image size in both dimensions. The approximation sub-band (LL), which is used for processing, is then divided into overlapping blocks, and modified Zernike moments are calculated for each block as feature vectors. Because more pixels are considered, more informative features are extracted. Lexicographic sorting and correlation-coefficient computation on the feature vectors are then used to find similar blocks. Applying the DWT to reduce the dimension of the image before using Zernike moments with updated coefficients improves the computational time and increases detection accuracy. Copied or duplicated regions are detected as traces of copy-move forgery based on a correlation-coefficient threshold and confirmed by a Euclidean-distance constraint. Comparison with related methods demonstrates the feasibility and efficiency of the proposed algorithm.
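
A minimal Python sketch of the preprocessing and block-extraction stages described above, assuming the PyWavelets package for the one-level DWT; the block size and step are illustrative, and the Zernike-moment features, lexicographic sorting, and matching thresholds are omitted.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def preprocess(gray_image):
    """One-level 2D DWT; the LL (approximation) sub-band, half-size in each
    dimension, is the channel used for block matching."""
    LL, (LH, HL, HH) = pywt.dwt2(gray_image.astype(float), "haar")
    return LL

def overlapping_blocks(channel, block=8, step=1):
    """Yield (row, col, block) for every overlapping block of the LL sub-band."""
    rows, cols = channel.shape
    for r in range(0, rows - block + 1, step):
        for c in range(0, cols - block + 1, step):
            yield r, c, channel[r:r + block, c:c + block]
```

In the full method, each block would be summarized by its modified Zernike moments, the resulting feature vectors sorted lexicographically, and candidate pairs accepted only when their correlation exceeds a threshold and their spatial (Euclidean) separation is large enough.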


SLEEP
2020
Author(s):
Luca Menghini
Nicola Cellini
Aimee Goldstone
Fiona C Baker
Massimiliano de Zambotti

Abstract Sleep-tracking devices, particularly within the consumer sleep technology (CST) space, are increasingly used in both research and clinical settings, providing new opportunities for large-scale data collection in highly ecological conditions. Due to the fast pace of the CST industry combined with the lack of a standardized framework to evaluate the performance of sleep trackers, their accuracy and reliability in measuring sleep remain largely unknown. Here, we provide a step-by-step analytical framework for evaluating the performance of sleep trackers (including standard actigraphy), as compared with gold-standard polysomnography (PSG) or other reference methods. The analytical guidelines are based on recent recommendations for evaluating and using CST from our group and others (de Zambotti and colleagues; Depner and colleagues), and include raw data organization as well as critical analytical procedures, including discrepancy analysis, Bland–Altman plots, and epoch-by-epoch analysis. Analytical steps are accompanied by open-source R functions (available at https://sri-human-sleep.github.io/sleep-trackers-performance/AnalyticalPipeline_v1.0.0.html). In addition, an empirical sample dataset is used to describe and discuss the main outcomes of the proposed pipeline. The guidelines and the accompanying functions are aimed at standardizing the testing of CST performance, not only to increase the replicability of validation studies but also to provide ready-to-use tools for researchers and clinicians. All in all, this work can help to increase the efficiency, interpretation, and quality of validation studies, and to improve the informed adoption of CST in research and clinical settings.
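
The authors' pipeline is distributed as R functions at the URL above; purely as an illustration of one of its core steps, the Python sketch below computes the Bland–Altman bias and 95% limits of agreement between device-derived and reference (PSG) sleep measures. The function name and the example numbers are illustrative, not part of the published pipeline.

```python
import numpy as np

def bland_altman(device, reference):
    """Bias and 95% limits of agreement between device and reference measures."""
    device, reference = np.asarray(device, float), np.asarray(reference, float)
    diff = device - reference
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Made-up example: total sleep time (minutes) from a tracker vs. PSG over five nights.
bias, loa = bland_altman([420, 395, 410, 450, 380], [400, 390, 420, 440, 370])
```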


2021
Vol 11 (1)
Author(s):
Israel F. Araujo
Daniel K. Park
Francesco Petruccione
Adenilton J. da Silva

Abstract Advantages in several fields of research and industry are expected with the rise of quantum computers. However, the computational cost of loading classical data into quantum computers can impose restrictions on possible quantum speedups. Known algorithms to create arbitrary quantum states require quantum circuits with depth O(N) to load an N-dimensional vector. Here, we show that it is possible to load an N-dimensional vector with an exponential time advantage using a quantum circuit with polylogarithmic depth and entangled information in ancillary qubits. The results show that we can efficiently load data into quantum devices using a divide-and-conquer strategy to exchange computational time for space. We demonstrate a proof of concept on a real quantum device and present two applications for quantum machine learning. We expect that this new loading strategy will allow the quantum speedup of tasks that require loading a significant volume of information into quantum devices.
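
As a rough sketch of the bookkeeping behind tree-based amplitude encoding (not the authors' divide-and-conquer circuit construction), the snippet below computes in plain NumPy the RY rotation angles of a binary splitting tree for a real, non-negative, normalized amplitude vector; the function name and conventions are illustrative.

```python
import numpy as np

def angle_tree(amplitudes):
    """Rotation angles of a binary state-preparation tree: at each node an
    RY(theta) with cos(theta/2) = sqrt(p_left / p_parent) splits the
    probability mass between the two child branches."""
    a = np.asarray(amplitudes, float)
    assert len(a) & (len(a) - 1) == 0, "length must be a power of two"
    angles = []
    level = a ** 2                      # probabilities at the leaves
    while len(level) > 1:
        left, right = level[0::2], level[1::2]
        parent = left + right
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = np.where(parent > 0, left / parent, 1.0)
        angles.append(2 * np.arccos(np.sqrt(ratio)))
        level = parent
    return angles[::-1]                 # root-level angles first
```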


Symmetry
2021
Vol 13 (4)
pp. 645
Author(s):
Muhammad Farooq
Sehrish Sarfraz
Christophe Chesneau
Mahmood Ul Hassan
Muhammad Ali Raza
...  

Expectiles have gained considerable attention in recent years due to their wide applications in many areas. In this study, the k-nearest neighbours approach, combined with the asymmetric least squares loss function and called ex-kNN, is proposed for computing expectiles. Firstly, the effect of various distance measures on ex-kNN is evaluated in terms of test error and computational time. It is found that the Canberra, Lorentzian, and Soergel distance measures lead to minimum test error, whereas Euclidean, Canberra, and the average of (L1, L∞) lead to a low computational cost. Secondly, the performance of ex-kNN is compared with the existing packages er-boost and ex-svm for computing expectiles on nine real-life examples. Depending on the nature of the data, ex-kNN showed two to ten times better performance than er-boost and comparable performance with ex-svm regarding test error. Computationally, ex-kNN is found to be two to five times faster than ex-svm and much faster than er-boost, particularly in the case of high-dimensional data.
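
A minimal Python sketch of the idea behind ex-kNN (not the packaged implementation): select the k nearest neighbours of a query point, here simply with Euclidean distance, and return the τ-expectile of their responses via the standard asymmetric-least-squares fixed-point iteration. The values of k and τ and all names are illustrative.

```python
import numpy as np

def expectile(y, tau=0.5, tol=1e-8, max_iter=100):
    """Asymmetric-least-squares expectile of a sample via fixed-point iteration."""
    y = np.asarray(y, float)
    mu = y.mean()
    for _ in range(max_iter):
        w = np.where(y > mu, tau, 1.0 - tau)   # asymmetric weights
        mu_new = np.sum(w * y) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

def ex_knn_predict(X_train, y_train, x_query, k=10, tau=0.9):
    """Predict the tau-expectile of the k Euclidean nearest neighbours' responses."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(d)[:k]
    return expectile(y_train[idx], tau)
```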


2021
Vol 11 (2)
pp. 813
Author(s):
Shuai Teng
Zongchao Liu
Gongfa Chen
Li Cheng

This paper compares the crack detection performance (in terms of precision and computational cost) of YOLO_v2 using 11 feature extractors, which provides a basis for realizing fast and accurate crack detection on concrete structures. Cracks on concrete structures are an important indicator for assessing their durability and safety, and real-time crack detection is an essential task in structural maintenance. Object detection algorithms, especially the YOLO series of networks, have significant potential in crack detection, and the feature extractor is the most important component of YOLO_v2. Hence, this paper employs 11 well-known CNN models as the feature extractor of YOLO_v2 for crack detection. The results confirm that different feature extractor models of the YOLO_v2 network lead to different detection results: the AP value is 0.89, 0, and 0 for 'resnet18', 'alexnet', and 'vgg16', respectively, while 'googlenet' (AP = 0.84) and 'mobilenetv2' (AP = 0.87) also demonstrate comparable AP values. In terms of computing speed, 'alexnet' takes the least computational time, with 'squeezenet' and 'resnet18' ranked second and third, respectively; therefore, 'resnet18' is the best feature extractor model in terms of precision and computational cost. Additionally, a parametric study (of the influence of training epoch, feature extraction layer, and testing image size on the detection results) shows that these parameters indeed have an impact on the detection results. It is demonstrated that excellent crack detection results can be achieved by the YOLO_v2 detector when an appropriate feature extractor model, training epoch, feature extraction layer, and testing image size are chosen.
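
The AP values quoted above are ultimately built on IoU matching between predicted and ground-truth boxes; the short Python sketch below shows that matching criterion in a framework-agnostic form (the box format and threshold are illustrative, and the code is not tied to any particular YOLO_v2 implementation).

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred_box, gt_box, threshold=0.5):
    """A detection counts as a true positive when its IoU exceeds the threshold."""
    return iou(pred_box, gt_box) >= threshold
```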


2018
Author(s):
Fabien Maussion
Anton Butenko
Julia Eis
Kévin Fourteau
Alexander H. Jarosch
...  

Abstract. Despite their importance for sea-level rise, seasonal water availability, and as a source of geohazards, mountain glaciers are one of the few remaining sub-systems of the global climate system for which no globally applicable, open-source, community-driven model exists. Here we present the Open Global Glacier Model (OGGM, http://www.oggm.org), developed to provide a modular and open-source numerical model framework for simulating past and future change of any glacier in the world. The modelling chain comprises data downloading tools (glacier outlines, topography, climate, validation data), a preprocessing module, a mass-balance model, a distributed ice thickness estimation model, and an ice flow model. The monthly mass balance is obtained from gridded climate data and a temperature-index melt model. To our knowledge, OGGM is the first global model to explicitly simulate glacier dynamics: the model relies on the shallow ice approximation to compute the depth-integrated flux of ice along multiple connected flowlines. In this paper, we describe and illustrate each processing step by applying the model to a selection of glaciers before running global simulations under idealized climate forcings. Even without an in-depth calibration, the model shows very realistic behaviour. We are able to reproduce earlier estimates of global glacier volume by varying the ice dynamical parameters within a range of plausible values. At the same time, the increased complexity of OGGM compared to other prevalent global glacier models comes at a reasonable computational cost: several dozen glaciers can be simulated on a personal computer, while global simulations realized in a supercomputing environment take up to a few hours per century. Thanks to the modular framework, modules of various complexity can be added to the codebase, allowing new kinds of model intercomparisons to be run in a controlled environment. Future developments will add new physical processes to the model as well as tools to calibrate the model in a more comprehensive way. OGGM spans a wide range of applications, from ice-climate interaction studies at millennial time scales to estimates of the contribution of glaciers to past and future sea-level change. It has the potential to become a self-sustained, community-driven model for global and regional glacier evolution.
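
For readers unfamiliar with temperature-index melt models, the following Python sketch illustrates the general form of such a mass-balance scheme; the melt factor, thresholds, and function names are illustrative and are not OGGM's actual API or calibrated parameters.

```python
import numpy as np

def monthly_mass_balance(temp_c, precip_mm, mu_melt=8.0, t_melt=0.0, t_solid=2.0):
    """Illustrative temperature-index mass balance (mm w.e. per month).

    temp_c, precip_mm : monthly air temperature (deg C) and precipitation (mm)
    mu_melt           : melt factor (mm w.e. per deg C per month), made-up value
    t_melt, t_solid   : melt onset and solid-precipitation temperature thresholds
    """
    temp_c = np.asarray(temp_c, float)
    precip_mm = np.asarray(precip_mm, float)
    melt = mu_melt * np.maximum(temp_c - t_melt, 0.0)          # melt above threshold
    accumulation = np.where(temp_c < t_solid, precip_mm, 0.0)  # snowfall only when cold
    return accumulation - melt
```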


Author(s):  
Luca Mangani
David Roos Launchbury
Ernesto Casartelli
Giulio Romanelli

The computation of heat transfer phenomena in gas turbines plays a key role in the continuous quest to increase the performance and life of both components and the machine. In order to assess different cooling approaches, computational fluid dynamics (CFD) is a fundamental tool. Until now the task has often been carried out with RANS simulations, mainly due to their relatively short computational time. The clear drawback of this approach is accuracy, especially in those situations where averaged turbulence structures are not able to capture the flow physics, thus under- or overestimating the local heat transfer. The present work shows the development of a new explicit high-order incompressible solver for time-dependent flows based on the open-source C++ toolbox OpenFOAM. The solver computes the spatially filtered Navier-Stokes equations used in large eddy simulations of incompressible flows. An overview of the development methods is provided, presenting numerical and algorithmic details. The solver is verified using the method of manufactured solutions, and a series of numerical experiments is performed to show third-order accuracy in time and low temporal error levels. Typical cooling devices in turbomachinery applications are then investigated, such as the flow over a turbulator geometry involving heated walls and a film cooling application. The performance of various sub-grid-scale models is tested, including static Smagorinsky, dynamic Lagrangian, dynamic one-equation, dynamic Smagorinsky, WALE, and sigma models. Good results were obtained in all cases, with variations among the individual models.
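
Verification by the method of manufactured solutions typically reduces to checking the observed order of accuracy from errors on successively refined time steps; a small, generic Python sketch of that check follows. The example error values are made up to show what third-order convergence looks like and are not results from the paper.

```python
import numpy as np

def observed_order(errors, step_sizes):
    """Observed order of accuracy from successive refinements:
    p = log(e_i / e_{i+1}) / log(h_i / h_{i+1})."""
    e = np.asarray(errors, float)
    h = np.asarray(step_sizes, float)
    return np.log(e[:-1] / e[1:]) / np.log(h[:-1] / h[1:])

# Example: with a third-order scheme, halving the step shrinks the error ~8x.
print(observed_order([1.0e-3, 1.26e-4, 1.58e-5], [0.01, 0.005, 0.0025]))  # ~[3.0, 3.0]
```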


2019
Vol 99 (2)
pp. 1105-1130
Author(s):
Kun Yang
Vladimir Paramygin
Y. Peter Sheng

Abstract The joint probability method (JPM) is the traditional way to determine the base flood elevation due to storm surge, and it usually requires simulating the storm surge response of tens of thousands of synthetic storms. The simulated storm surge is combined with probabilistic storm rates to create flood maps with various return periods. However, map production requires an enormous computational cost if state-of-the-art hydrodynamic models with high-resolution numerical grids are used; hence, optimal sampling (JPM-OS) with a small number (~ 100–200) of optimal (representative) storms is preferred. This paper presents a significantly improved JPM-OS, in which a small number of optimal storms are objectively selected, and the simulated storm surge responses of tens of thousands of storms are accurately interpolated from those of the optimal storms using a highly efficient kriging surrogate model. This study focuses on Southwest Florida and considers ~ 150 optimal storms that are selected based on simulations using either the low-fidelity (low-resolution, simple-physics) SLOSH model or the high-fidelity (high-resolution, comprehensive-physics) CH3D model. Surge responses to the optimal storms are simulated using both SLOSH and CH3D, and the flood elevations are calculated using JPM-OS with highly efficient kriging interpolations. For verification, the probabilistic inundation maps are compared to those obtained by the traditional JPM and by variations of JPM-OS that employ different interpolation schemes, and the computed probabilistic water levels are compared to those calculated by historical storm methods. The inundation maps obtained with the JPM-OS differ by less than 10% from those obtained with the JPM for 20,625 storms, at only 4% of the computational time.
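
As a generic illustration of the surrogate step (not the authors' kriging model or its covariance choices), the Python sketch below interpolates surge responses for a large storm set from a small set of optimal-storm simulations using a simple squared-exponential kernel; the kernel, its length scale, and all names are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential covariance between two sets of storm-parameter vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def kriging_predict(X_opt, y_opt, X_all, length_scale=1.0, nugget=1e-8):
    """Interpolate surge responses for all storms from the optimal-storm runs."""
    K = rbf_kernel(X_opt, X_opt, length_scale) + nugget * np.eye(len(X_opt))
    weights = np.linalg.solve(K, y_opt)          # fit interpolation weights
    return rbf_kernel(X_all, X_opt, length_scale) @ weights
```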


2012
Vol 14 (S1)
Author(s):
Lukas Havla
Tamer A Basha
Hussein Rayatzadeh
Jaime L Shaw
Warren J Manning
...  

Author(s):  
K H Groves
P Bonello
P M Hai

Essential to effective aeroengine design is the rapid simulation of the dynamic performance of a variety of engine and non-linear squeeze-film damper (SFD) bearing configurations. Using recently introduced non-linear solvers combined with non-parametric identification of high-accuracy bearing models, it is possible to run full-engine rotordynamic simulations, in both the time and frequency domains, at a fraction of the previous computational cost. Using a novel reduced form of Chebyshev polynomial fits, efficient and accurate identification of the numerical solution to the two-dimensional Reynolds equation (RE) is achieved. The engine analysed is a twin-spool, five-SFD engine model provided by a leading manufacturer. Whole-engine simulations obtained using Chebyshev-identified bearing models of the finite difference (FD) solution to the RE are compared with those obtained from the original FD bearing models. For both time and frequency domain analyses, the Chebyshev-identified bearing models are shown to mimic accurately and consistently the simulations obtained from the FD models in under 10 per cent of the computational time. An illustrative parameter study is performed to demonstrate the capabilities of the combination of recently developed and novel techniques utilised in this paper.
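
As a generic illustration of identifying a force model with Chebyshev polynomials (not the authors' reduced formulation), the Python sketch below fits and evaluates a two-dimensional Chebyshev surface to tabulated values of a force component over two bearing state variables pre-scaled to [-1, 1]; the polynomial degrees and names are illustrative.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def fit_cheb2d(x, y, f, deg=(8, 8)):
    """Least-squares 2D Chebyshev fit of tabulated values f(x, y),
    with x and y already scaled to the interval [-1, 1]."""
    V = C.chebvander2d(x, y, deg)                  # pseudo-Vandermonde matrix
    coef, *_ = np.linalg.lstsq(V, f, rcond=None)   # flat coefficient vector
    return coef.reshape(deg[0] + 1, deg[1] + 1)

def eval_cheb2d(coef, x, y):
    """Evaluate the fitted surrogate at new (x, y) points."""
    return C.chebval2d(x, y, coef)
```

Evaluating such a fitted surface is far cheaper than re-solving the Reynolds equation at every time step, which is the motivation for replacing the FD bearing model with an identified polynomial surrogate.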

