Efficient Hyper-Parameter Selection in Total Variation-Penalised XCT Reconstruction Using Freund and Shapire’s Hedge Approach

Mathematics ◽  
2020 ◽  
Vol 8 (4) ◽  
pp. 493 ◽  
Author(s):  
Stéphane Chrétien ◽  
Manasavee Lohvithee ◽  
Wenjuan Sun ◽  
Manuchehr Soleimani

This paper studies the problem of efficiently tuning the hyper-parameters in penalised least-squares reconstruction for XCT. Viewed through the lens of the Compressed Sensing paradigm, penalisation functionals such as Total Variation-type norms form an essential tool for enforcing structure in inverse problems, a key feature when the number of projections is small compared with the size of the object to recover. In this paper, we propose a novel hyper-parameter selection approach for total variation (TV)-based reconstruction algorithms, based on a boosting-type machine learning procedure initially proposed by Freund and Schapire and called Hedge. The proposed approach is able to select a set of hyper-parameters producing better reconstructions than the traditional cross-validation approach, with reduced computational effort. Traditional penalisation-based reconstruction methods can thus be made more efficient using boosting-type methods from machine learning.
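
The Hedge mechanism at the core of this approach is compact. The following is a minimal multiplicative-weights sketch, not the authors' exact procedure; `round_loss` is a hypothetical callback that would score a candidate hyper-parameter set, e.g. by the held-out projection error of the TV reconstruction after a batch of solver iterations:

```python
import numpy as np

def hedge_select(candidates, round_loss, n_rounds=20, eta=0.5):
    """Hedge (Freund & Schapire): maintain multiplicative weights over a
    finite pool of candidate hyper-parameter settings."""
    w = np.ones(len(candidates))
    for t in range(n_rounds):
        losses = np.array([round_loss(t, c) for c in candidates])  # each in [0, 1]
        w *= np.exp(-eta * losses)   # down-weight candidates that did poorly
        w /= w.sum()                 # renormalise for numerical stability
    return candidates[int(np.argmax(w))]
```

Compared with cross-validation, which scores every candidate to completion, the weights concentrate on promising candidates as the rounds proceed, which is where the computational saving comes from.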

Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3701 ◽  
Author(s):  
Jin Zheng ◽  
Jinku Li ◽  
Yi Li ◽  
Lihui Peng

Electrical Capacitance Tomography (ECT) image reconstruction has developed for decades and made great achievements, but there is still a need for a new theoretical framework to make it better and faster. In recent years, machine learning theory has been introduced in the ECT area to solve the image reconstruction problem. However, there is still no public benchmark dataset in the ECT field for training and testing machine learning-based image reconstruction algorithms. A public benchmark dataset would also provide a standard framework to evaluate and compare the results of different image reconstruction methods. In this paper, a benchmark dataset for ECT image reconstruction is presented. Much as ImageNet transformed machine learning research, this benchmark dataset is intended to help the community investigate new image reconstruction algorithms, since the relationship between permittivity distribution and capacitance can be better mapped. In addition, different machine learning-based image reconstruction algorithms can be trained and tested on the unified dataset and their results evaluated and compared under the same standard, making ECT image reconstruction research more open and paving the way for breakthroughs.
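
One thing such a benchmark enables is scoring every algorithm with the same metrics. A minimal sketch of a shared evaluation routine; the two metrics below (relative image error and correlation coefficient) are common in the ECT literature, and the function name is illustrative:

```python
import numpy as np

def evaluate_reconstruction(g_true, g_rec):
    """Score an ECT reconstruction against the ground-truth permittivity
    distribution with relative image error and correlation coefficient."""
    g_true = np.asarray(g_true, dtype=float).ravel()
    g_rec = np.asarray(g_rec, dtype=float).ravel()
    image_error = np.linalg.norm(g_rec - g_true) / np.linalg.norm(g_true)
    cc = np.corrcoef(g_rec, g_true)[0, 1]
    return image_error, cc
```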


Author(s):  
Mohsen Nourazar ◽  
Bart Goossens

Tensor Cores are specialized hardware units added to recent NVIDIA GPUs to speed up matrix multiplication-related tasks, such as convolutions and densely connected layers in neural networks. Due to their specific hardware implementation and programming model, Tensor Cores cannot be straightforwardly applied to other applications outside machine learning. In this paper, we demonstrate the feasibility of using NVIDIA Tensor Cores for the acceleration of a non-machine learning application: iterative Computed Tomography (CT) reconstruction. For large CT images and real-time CT scanning, the reconstruction time for many existing iterative reconstruction methods is relatively high, ranging from seconds to minutes, depending on the size of the image. Therefore, CT reconstruction is an application area that could potentially benefit from Tensor Core hardware acceleration. We first studied the reconstruction algorithm’s performance as a function of the hardware-related parameters and proposed an approach to accelerate reconstruction on Tensor Cores. The results show that the proposed method provides about a 5× increase in speed and energy saving using the NVIDIA RTX 2080 Ti GPU for the parallel projection of 32 images of size 512 × 512. The relative reconstruction error due to the mixed-precision computations was almost equal to the error of single-precision (32-bit) floating-point computations. We then presented an approach for real-time and memory-limited applications by exploiting the symmetry of the system (i.e., the acquisition geometry). As the proposed approach is based on the conjugate gradient method, it can be generalized to many research and industrial fields.
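
The core numerical idea, routing the matrix products inside a conjugate gradient solver through FP16 inputs with FP32 accumulation (the mode Tensor Cores implement in hardware), can be emulated on the CPU. A schematic sketch, not the authors' GPU implementation; `mm_mixed` and the dense system matrix `A` are illustrative stand-ins:

```python
import numpy as np

def mm_mixed(A, x):
    # Emulate Tensor Core mixed precision: round inputs to FP16,
    # accumulate the product in FP32.
    return A.astype(np.float16).astype(np.float32) @ x.astype(np.float16).astype(np.float32)

def cg_normal_equations(A, b, n_iter=50):
    """Conjugate gradient on the normal equations A^T A x = A^T b,
    with every matrix-vector product routed through mixed precision."""
    x = np.zeros(A.shape[1], dtype=np.float32)
    r = mm_mixed(A.T, b)        # initial residual A^T b (x starts at zero)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = mm_mixed(A.T, mm_mixed(A, p))   # A^T A p via two mixed products
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Comparing the output of this routine with a pure float64 run is one way to reproduce, in miniature, the paper's observation that the mixed-precision error stays close to the single-precision error.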


2020 ◽  
Vol 25 (40) ◽  
pp. 4296-4302 ◽  
Author(s):  
Yuan Zhang ◽  
Zhenyan Han ◽  
Qian Gao ◽  
Xiaoyi Bai ◽  
Chi Zhang ◽  
...  

Background: β thalassemia is a common monogenic genetic disease that is very harmful to human health. The disease arises from the deletion of or defects in the β-globin gene, which reduces synthesis of the β-globin chain and results in a relative excess of α-chains. Inclusion bodies deposited on the cell membrane reduce the deformability of red blood cells, giving rise to a group of hereditary haemolytic diseases caused by massive destruction of these cells in the spleen. Methods: In this work, machine learning algorithms were employed to build a prediction model for inhibitors against K562 cells based on 117 inhibitors and 190 non-inhibitors. Results: The overall accuracies (ACC) of a 10-fold cross-validation test and an independent set test using AdaBoost were 83.1% and 78.0%, respectively, surpassing Bayes Net, Random Forest, Random Tree, C4.5, SVM, KNN and Bagging. Conclusion: This study indicated that AdaBoost could be applied to build a learning model for the prediction of inhibitors against K562 cells.
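
A minimal sketch of the protocol the Methods describe, an AdaBoost classifier scored by 10-fold cross-validation in scikit-learn; the random matrix is a placeholder for the actual molecular descriptors of the 117 inhibitors and 190 non-inhibitors:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# X: descriptor matrix, y: 1 = inhibitor, 0 = non-inhibitor.
rng = np.random.default_rng(0)
X = rng.normal(size=(307, 50))          # placeholder features
y = np.array([1] * 117 + [0] * 190)     # class sizes from the study

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"10-fold CV accuracy: {acc.mean():.3f}")
```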


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 591
Author(s):  
Manasavee Lohvithee ◽  
Wenjuan Sun ◽  
Stephane Chretien ◽  
Manuchehr Soleimani

In this paper, a computer-aided training method for hyperparameter selection in limited-data X-ray computed tomography (XCT) reconstruction was proposed. The proposed method employed the ant colony optimisation (ACO) approach to assist in hyperparameter selection for the adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm, a total-variation (TV) based regularisation algorithm. During the implementation, a colony of artificial ants swarmed through the hyperparameter space of the AwPCSD algorithm. Each ant chose a set of hyperparameters for its iterative CT reconstruction, and a correlation coefficient (CC) score was computed by comparing the reconstructed image with the reference image. Each ant in a generation left pheromone along its chosen path, which represents a choice of hyperparameters; a higher score means a stronger pheromone trail and a higher probability of attracting ants in the next generations. At the end of the implementation, the hyperparameter configuration with the highest score was chosen as the optimal set of hyperparameters. In the experimental results section, reconstruction using hyperparameters from the proposed method was compared with results from three other cases: the conjugate gradient least squares (CGLS) algorithm, the AwPCSD algorithm using an arbitrary set of hyperparameters, and the cross-validation method. The experiments showed that the results from the proposed method were superior to those of the CGLS algorithm and the AwPCSD algorithm with arbitrary hyperparameters. Although the results of the ACO algorithm were slightly inferior to those of the cross-validation method as measured by the quantitative metrics, the ACO algorithm was over 10 times faster than cross-validation. The optimal set of hyperparameters from the proposed method was also robust against an increase of noise in the data and is applicable to different imaging samples with similar context. The ACO approach in the proposed method was able to identify optimal values of hyperparameters for a dataset and, as a result, produced a good-quality reconstructed image from a limited number of projections. The proposed method successfully addresses the problem of hyperparameter selection, a major challenge in the implementation of TV-based reconstruction algorithms.
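
A schematic sketch of the ant colony loop described above, not the exact AwPCSD-specific procedure; `score_fn` is a hypothetical callback that would run the reconstruction with a candidate hyperparameter tuple and return its CC score against the reference image:

```python
import numpy as np

def aco_select(grids, score_fn, n_ants=10, n_gens=20, rho=0.1, rng=None):
    # grids   : list of candidate-value arrays, one per hyperparameter
    # score_fn: score_fn(hp_tuple) -> CC score in [0, 1]
    rng = rng or np.random.default_rng(0)
    tau = [np.ones(len(g)) for g in grids]        # pheromone per candidate value
    best, best_score = None, -np.inf
    for _ in range(n_gens):
        for _ in range(n_ants):
            # each ant samples one value per hyperparameter, biased by pheromone
            idx = [rng.choice(len(g), p=t / t.sum()) for g, t in zip(grids, tau)]
            hp = tuple(g[i] for g, i in zip(grids, idx))
            s = score_fn(hp)
            for t, i in zip(tau, idx):
                t[i] += s                          # deposit pheromone on chosen path
            if s > best_score:
                best, best_score = hp, s
        tau = [(1 - rho) * t for t in tau]         # evaporation between generations
    return best, best_score
```

The evaporation factor `rho` keeps early, possibly lucky choices from dominating later generations, which is what makes the search cheaper than exhaustively scoring a cross-validation grid.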


2021 ◽  
Vol 13 (3) ◽  
pp. 408
Author(s):  
Charles Nickmilder ◽  
Anthony Tedde ◽  
Isabelle Dufrasne ◽  
Françoise Lessire ◽  
Bernard Tychon ◽  
...  

Accurate information about the available standing biomass on pastures is critical for the adequate management of grazing and its promotion to farmers. In this paper, machine learning models are developed to predict available biomass, expressed as compressed sward height (CSH), from readily accessible meteorological, optical (Sentinel-2) and radar (Sentinel-1) satellite data. This study assumed that combining heterogeneous data sources, data transformations and machine learning methods would improve the robustness and accuracy of the developed models. A total of 72,795 spatially positioned records of CSH, collected in 2018 and 2019, were used and aggregated according to a pixel-like pattern. The resulting dataset was split into a training set with 11,625 pixellated records and an independent validation set with 4,952 pixellated records. The models were trained with a 19-fold cross-validation. A wide range of performances was observed (with mean cross-validation root mean square error (RMSE) ranging from 22.84 mm of CSH to infinite-like values), and the four best-performing models were a cubist, a glmnet, a neural network and a random forest. These models had an RMSE of independent validation lower than 20 mm of CSH at the pixel level. To simulate the behavior of the model in a decision support system, performances at the paddock level were also studied. These were computed according to two scenarios: either the predictions were made at a sub-parcel level and then aggregated, or the data were aggregated at the parcel level and the predictions were made for these aggregated data. The results obtained in this study were more accurate than those found in the literature concerning pasture budgeting and grassland biomass evaluation. The training of the 124 models resulting from the described framework was part of the realization of a decision support system to help farmers in their daily decision making.
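
A minimal sketch of one arm of such a framework, a random forest scored by cross-validated RMSE as in the paper's 19-fold protocol; the random data is a placeholder for the meteorological and Sentinel-1/2 features:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# X: per-pixel features (weather + satellite bands), y: CSH in mm.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))          # placeholder features
y = 60 + 10 * rng.normal(size=500)      # placeholder CSH targets

rf = RandomForestRegressor(n_estimators=200, random_state=0)
neg_rmse = cross_val_score(rf, X, y, cv=19, scoring="neg_root_mean_squared_error")
print(f"mean cross-validation RMSE: {-neg_rmse.mean():.2f} mm of CSH")
```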


2021 ◽  
pp. 1-15
Author(s):  
Sung Hoon Kang ◽  
Bo Kyoung Cheon ◽  
Ji-Sun Kim ◽  
Hyemin Jang ◽  
Hee Jin Kim ◽  
...  

Background: Amyloid-β (Aβ) evaluation in amnestic mild cognitive impairment (aMCI) patients is important for predicting conversion to Alzheimer’s disease. However, Aβ evaluation through amyloid positron emission tomography (PET) is limited due to high cost and safety issues. Objective: We therefore aimed to develop and validate prediction models of Aβ positivity for aMCI using optimal interpretable machine learning (ML) approaches utilizing multimodal markers. Methods: We recruited 529 aMCI patients from multiple centers who underwent Aβ PET. We trained ML algorithms using a training cohort (324 aMCI from Samsung Medical Center) with two-phase modelling: model 1 included age, gender, education, diabetes, hypertension, apolipoprotein E genotype, and neuropsychological test scores; model 2 included the same variables as model 1 with additional MRI features. We used four-fold cross-validation during the modelling and evaluated the models on an external validation cohort (187 aMCI from the other centers). Results: Model 1 showed good accuracy (area under the receiver operating characteristic curve [AUROC] 0.837) in cross-validation and fair accuracy (AUROC 0.765) in external validation. Model 2 improved the prediction performance over model 1, with good accuracy (AUROC 0.892) in cross-validation. Apolipoprotein E genotype, delayed recall task scores, and the interaction between cortical thickness in the temporal region and hippocampal volume were the most important predictors of Aβ positivity. Conclusion: Our results suggest that ML models are effective in predicting Aβ positivity at the individual level and could help the biomarker-guided diagnosis of prodromal AD.
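
A schematic sketch of the two-phase comparison, with logistic regression standing in for whichever interpretable ML algorithm is chosen and random data as a placeholder for the cohort variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: columns 0-6 stand for the clinical/neuropsychological
# variables of model 1; the remaining columns for the extra MRI features.
rng = np.random.default_rng(2)
X = rng.normal(size=(324, 30))
y = rng.integers(0, 2, size=324)        # placeholder Abeta-positivity labels

cv = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc1 = cross_val_score(clf, X[:, :7], y, cv=cv, scoring="roc_auc")  # model 1
auc2 = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")         # model 2
print(f"model 1 AUROC: {auc1.mean():.3f}, model 2 AUROC: {auc2.mean():.3f}")
```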


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Carly A. Bobak ◽  
Lili Kang ◽  
Lesley Workman ◽  
Lindy Bateman ◽  
Mohammad S. Khan ◽  
...  

Pediatric tuberculosis (TB) remains a global health crisis. Despite progress, pediatric patients remain difficult to diagnose, with approximately half of all childhood TB patients lacking bacterial confirmation. In this pilot study (n = 31), we identify a 4-compound breathprint and subsequent machine learning model that accurately classifies children with confirmed TB (n = 10) from children with another lower respiratory tract infection (LRTI) (n = 10), with a sensitivity of 80% and specificity of 100% observed across cross-validation folds. Importantly, we demonstrate that the breathprint identified an additional nine of eleven patients who had unconfirmed clinical TB and whose symptoms improved while treated for TB. While more work is necessary to validate the utility of using patient breath to diagnose pediatric TB, it shows promise as a triage instrument or as part of an aggregate diagnostic scheme.
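
Fold-wise sensitivity and specificity of the kind reported here can be computed as follows; this is a generic sketch, with an SVM standing in for the study's unspecified model and random placeholder breathprint features:

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

# X: 4-compound breathprint features; y: 1 = confirmed TB, 0 = other LRTI.
rng = np.random.default_rng(3)
X = rng.normal(size=(20, 4))
y = np.array([1] * 10 + [0] * 10)

sens, spec = [], []
for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    pred = SVC().fit(X[tr], y[tr]).predict(X[te])
    tn, fp, fn, tp = confusion_matrix(y[te], pred, labels=[0, 1]).ravel()
    sens.append(tp / (tp + fn))   # true-positive rate on this fold
    spec.append(tn / (tn + fp))   # true-negative rate on this fold
print(f"sensitivity {np.mean(sens):.2f}, specificity {np.mean(spec):.2f}")
```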


2017 ◽  
Vol 2017 ◽  
pp. 1-10
Author(s):  
Hsuan-Ming Huang ◽  
Ing-Tsung Hsiao

Background and Objective. Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing (CS)-based reconstruction methods. However, these methods have some disadvantages including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. Methods. First, total difference minimization (TDM) was implemented using the soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm for accelerating the convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results. Results obtained from simulation and phantom studies showed that many speed-up techniques could be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increased computation time (≤10%) was minor as compared to the acceleration provided by the proposed method. Conclusions. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
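
The FISTA acceleration applied in the last step has a compact generic form. A sketch, with `grad` standing for the gradient of the transmission data-fidelity term and `shrink` for the soft-threshold filtering step (both hypothetical callbacks here):

```python
import numpy as np

def soft(x, t):
    # element-wise soft-thresholding (shrinkage) operator
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(grad, shrink, x0, step, n_iter=100):
    """Generic FISTA loop: a gradient step on the data-fidelity term,
    a shrinkage step (standing in for soft-threshold filtering),
    and Nesterov momentum for acceleration."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = shrink(y - step * grad(y))
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

# e.g. shrink = lambda z: soft(z, 0.01) for a fixed threshold
```

The momentum term is what lifts the convergence rate from O(1/k) to O(1/k²), which is why it combines so well with the ordered-subsets acceleration of the data term.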


2013 ◽  
Vol 2013 ◽  
pp. 1-14
Author(s):  
Joshua Kim ◽  
Huaiqun Guan ◽  
David Gersten ◽  
Tiezhi Zhang

Tetrahedron beam computed tomography (TBCT) performs volumetric imaging using a stack of fan beams generated by a multiple-pixel X-ray source. While the TBCT system was designed to overcome the scatter and detector issues faced by cone beam computed tomography (CBCT), it still suffers from the same large cone angle artifacts as CBCT due to the use of approximate reconstruction algorithms. It has been shown that iterative reconstruction algorithms are better able to model irregular system geometries and that algebraic iterative algorithms in particular have been able to reduce cone artifacts appearing at large cone angles. In this paper, the SART algorithm is modified for use with the different TBCT geometries and is tested using both simulated projection data and data acquired with the TBCT benchtop system. The modified SART reconstruction algorithms were able to mitigate the effects of using data generated at large cone angles and to reconstruct CT images without introducing artifacts due to either longitudinal or transverse truncation in the data sets. Algebraic iterative reconstruction can be especially useful for dual-source dual-detector TBCT, wherein the cone angle is largest in the center of the field of view.
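
For reference, one generic SART sweep over the rays of a single projection view looks as follows; this is a textbook form with a dense system matrix, not the TBCT-specific geometric modifications:

```python
import numpy as np

def sart_update(A, x, b, rows, relax=1.0):
    """One SART sweep: 'rows' indexes the rays of one projection view,
    A maps the image x to line integrals b."""
    Asub = A[rows]
    resid = b[rows] - Asub @ x               # ray-by-ray data mismatch
    row_sums = Asub.sum(axis=1)              # forward-projection weights
    col_sums = Asub.sum(axis=0)              # back-projection weights
    np.clip(row_sums, 1e-12, None, out=row_sums)   # guard against empty rays
    np.clip(col_sums, 1e-12, None, out=col_sums)   # guard against unseen pixels
    correction = Asub.T @ (resid / row_sums)
    return x + relax * correction / col_sums
```

Adapting such an update to TBCT amounts to building `A` (or matched forward/back projectors) for the tetrahedron beam geometry, which is exactly where algebraic methods are more flexible than analytic ones.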

