Core Imaging Library - Part II: multichannel reconstruction for dynamic and spectral tomography

Author(s):  
Evangelos Papoutsellis ◽  
Evelina Ametova ◽  
Claire Delplancke ◽  
Gemma Fardell ◽  
Jakob S. Jørgensen ◽  
...  

The newly developed core imaging library (CIL) is a flexible plug and play library for tomographic imaging with a specific focus on iterative reconstruction. CIL provides building blocks for tailored regularized reconstruction algorithms and explicitly supports multichannel tomographic data. In the first part of this two-part publication, we introduced the fundamentals of CIL. This paper focuses on applications of CIL for multichannel data, e.g. dynamic and spectral data. We formalize different optimization problems for colour processing, dynamic and hyperspectral tomography and demonstrate CIL’s capabilities for designing state-of-the-art reconstruction methods through case studies and code snapshots. This article is part of the theme issue ‘Synergistic tomographic image reconstruction: part 2’.
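As a hedged illustration (not necessarily the paper's exact formulation), a typical multichannel problem of the kind formalized here couples per-channel data fidelity with spatial and temporal regularization; the notation below is assumed for illustration, with A the forward projector, b_t the data for channel t, and α, β regularization weights.

```latex
\min_{u_1,\dots,u_T}\; \sum_{t=1}^{T} \tfrac{1}{2}\,\lVert A u_t - b_t \rVert_2^2
  \;+\; \alpha \sum_{t=1}^{T} \mathrm{TV}(u_t)
  \;+\; \beta \sum_{t=1}^{T-1} \lVert u_{t+1} - u_t \rVert_1
```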

Author(s):  
J. S. Jørgensen ◽  
E. Ametova ◽  
G. Burca ◽  
G. Fardell ◽  
E. Papoutsellis ◽  
...  

We present the Core Imaging Library (CIL), an open-source Python framework for tomographic imaging with particular emphasis on reconstruction of challenging datasets. Conventional filtered back-projection reconstruction tends to be insufficient for highly noisy, incomplete, non-standard or multi-channel data arising for example in dynamic, spectral and in situ tomography. CIL provides an extensive modular optimization framework for prototyping reconstruction methods including sparsity and total variation regularization, as well as tools for loading, preprocessing and visualizing tomographic data. The capabilities of CIL are demonstrated on a synchrotron example dataset and three challenging cases spanning golden-ratio neutron tomography, cone-beam X-ray laminography and positron emission tomography. This article is part of the theme issue ‘Synergistic tomographic image reconstruction: part 2’.
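To make the idea of iterative reconstruction concrete, here is a minimal sketch of a plain gradient (Landweber-type) scheme with a non-negativity constraint on a toy linear system; it is illustrative only and deliberately does not use CIL's actual API.

```python
import numpy as np

# Toy iterative reconstruction: gradient descent on 0.5*||Ax - b||^2 with a
# non-negativity constraint (illustrative only; not CIL's API).
rng = np.random.default_rng(0)
n_pixels, n_rays = 64, 96
A = rng.standard_normal((n_rays, n_pixels)) / np.sqrt(n_rays)  # toy "projector"
x_true = np.zeros(n_pixels); x_true[20:40] = 1.0               # toy phantom
b = A @ x_true + 0.01 * rng.standard_normal(n_rays)            # noisy data

step = 1.0 / np.linalg.norm(A, 2) ** 2                         # safe step size
x = np.zeros(n_pixels)
for _ in range(200):
    x += step * A.T @ (b - A @ x)   # gradient step on the data-fidelity term
    x = np.clip(x, 0, None)         # enforce non-negativity

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```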


Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1517
Author(s):  
Xinsheng Wang ◽  
Xiyue Wang

True random number generators (TRNGs) have been a research hotspot due to the requirements of secure encryption algorithms, and such circuits are necessary building blocks in state-of-the-art security controllers. In this paper, a TRNG based on random telegraph noise (RTN) with a controllable rate is proposed. A novel noise-array circuit is presented, consisting of digital decoder circuits and RTN noise circuits. The rate at which random numbers are generated is controlled by the speed at which the different gating signals are selected. Simulation results show that the array, consisting of 64 noise-source circuits, can generate random numbers at frequencies from 1 kHz to 16 kHz.
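As a rough software mimic of the described architecture (all names and parameters here are illustrative, not taken from the paper), the behaviour can be sketched with a bank of two-state RTN sources and a decoder that reads one of them per selection-clock tick.

```python
import numpy as np

# Software mimic: 64 two-state RTN sources plus a decoder that reads one
# source per selection-clock tick; the selection clock sets the output bit
# rate. All parameters are illustrative.
rng = np.random.default_rng(42)
n_sources = 64
p_flip = 0.3                                   # toggle probability per tick
state = rng.integers(0, 2, n_sources)          # current level of each source

def next_bit(select_line: int) -> int:
    """Advance every RTN source by one tick and read the selected one."""
    toggles = rng.random(n_sources) < p_flip
    state[toggles] = 1 - state[toggles]        # random telegraph transitions
    return int(state[select_line % n_sources])

# e.g. clocking the decoder at 4 kHz would yield 4000 bits per second
bits = [next_bit(i) for i in range(1000)]
print("fraction of ones:", sum(bits) / len(bits))
```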


Electronics ◽  
2021 ◽  
Vol 10 (10) ◽  
pp. 1136
Author(s):  
David Augusto Ribeiro ◽  
Juan Casavílca Silva ◽  
Renata Lopes Rosa ◽  
Muhammad Saadi ◽  
Shahid Mumtaz ◽  
...  

Light field (LF) imaging has multi-view properties that enable many applications, including auto-refocusing, depth estimation and 3D reconstruction of images, which are required particularly for intelligent transportation systems (ITSs). However, LF cameras can have limited angular resolution, which becomes a bottleneck in vision applications, and incorporating angular data is challenging because of disparities between the LF views. In recent years, different machine learning algorithms have been applied to both image processing and ITS research for different purposes. In this work, a Lightweight Deformable Deep Learning Framework is implemented that addresses the problem of disparity in LF images. To this end, an angular alignment module and a soft activation function are incorporated into the convolutional neural network (CNN). For performance assessment, the proposed solution is compared with recent state-of-the-art methods using different LF datasets, each with specific characteristics. Experimental results demonstrate that the proposed solution achieves better performance than the other methods: the image-quality results outperform state-of-the-art LF image reconstruction methods, and the model also has lower computational complexity, reducing execution time.
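A purely illustrative sketch (in PyTorch, with assumed layer sizes) of a lightweight residual convolutional block using a smooth "soft" activation of the general kind mentioned above; the paper's actual angular alignment module and network architecture are not reproduced here.

```python
import torch
import torch.nn as nn

# Illustrative lightweight residual block with a smooth ("soft") activation;
# not the authors' architecture.
class LightBlock(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.Softplus()          # smooth activation instead of ReLU

    def forward(self, x):
        return x + self.act(self.conv2(self.act(self.conv1(x))))  # residual path

x = torch.randn(1, 32, 64, 64)            # (batch, channels, height, width)
print(LightBlock()(x).shape)
```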


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 146
Author(s):  
Aleksei Vakhnin ◽  
Evgenii Sopov

Modern real-valued optimization problems are complex and high-dimensional; they are known as “large-scale global optimization (LSGO)” problems. Classic evolutionary algorithms (EAs) perform poorly on this class of problems because of the curse of dimensionality. Cooperative Coevolution (CC) is a high-performing framework for decomposing large-scale problems into smaller and easier subproblems by grouping objective variables. The efficiency of CC strongly depends on the group size and the grouping approach. In this study, an improved CC (iCC) approach for solving LSGO problems is proposed and investigated. iCC changes the number of variables in subcomponents dynamically during the optimization process. The SHADE algorithm is used as the subcomponent optimizer. We investigated the performance of iCC-SHADE and CC-SHADE on fifteen problems from the LSGO benchmark set of the IEEE Congress on Evolutionary Computation (CEC’13). The results of numerical experiments show that iCC-SHADE outperforms, on average, CC-SHADE with a fixed number of subcomponents. We also compared iCC-SHADE with several state-of-the-art LSGO metaheuristics; the experimental results show that the proposed algorithm is competitive with other efficient metaheuristics.
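A minimal sketch of the cooperative-coevolution loop described above: the decision vector is split into groups and one group is improved at a time while the others stay fixed. A trivial random-search step stands in for SHADE, and the dynamic group resizing of iCC is omitted; all settings are illustrative.

```python
import numpy as np

# Cooperative-coevolution skeleton on a toy objective (sphere function).
def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, group_size, n_cycles = 100, 10, 50
x = rng.uniform(-5, 5, dim)

for _ in range(n_cycles):
    for start in range(0, dim, group_size):
        idx = slice(start, start + group_size)     # current subcomponent
        for _ in range(20):                        # optimize it, rest frozen
            trial = x.copy()
            trial[idx] += rng.normal(0, 0.5, group_size)
            if sphere(trial) < sphere(x):
                x = trial

print("best value:", sphere(x))
```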


Author(s):  
Paul S. Addison

Redundancy: it is a word heavy with connotations of lacking usefulness. I often hear that the rationale for not using the continuous wavelet transform (CWT)—even when it appears most appropriate for the problem at hand—is that it is ‘redundant’. Sometimes the conversation ends there, as if self-explanatory. However, in the context of the CWT, ‘redundant’ is not a pejorative term; it simply refers to a less compact form used to represent the information within the signal. The benefit of this new form—the CWT—is that it allows for intricate structural characteristics of the signal information to be made manifest within the transform space, where it can be more amenable to study: resolution over redundancy. Once the signal information is in CWT form, a range of powerful analysis methods can then be employed for its extraction, interpretation and/or manipulation. This theme issue is intended to provide the reader with an overview of the current state of the art of CWT analysis methods from across a wide range of numerate disciplines, including fluid dynamics, structural mechanics, geophysics, medicine, astronomy and finance. This article is part of the theme issue ‘Redundancy rules: the continuous wavelet transform comes of age’.
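As a small concrete example of the redundancy being described, a CWT evaluated on a dense grid of scales produces far more coefficients than input samples; the sketch below assumes the PyWavelets package and a Morlet wavelet, with all signal parameters chosen only for illustration.

```python
import numpy as np
import pywt  # PyWavelets; assumed to be installed

# CWT of a two-tone test signal on a dense grid of scales.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 120 * t) * (t > 0.5)

scales = np.arange(1, 128)                    # densely sampled scales (redundant)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
print(coeffs.shape)   # (n_scales, n_samples): many more coefficients than samples
```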


Author(s):  
Michał R. Nowicki ◽  
Dominik Belter ◽  
Aleksander Kostusiak ◽  
Petr Cížek ◽  
Jan Faigl ◽  
...  

Purpose: This paper aims to evaluate four different simultaneous localization and mapping (SLAM) systems in the context of localization of multi-legged walking robots equipped with compact RGB-D sensors. It identifies problems related to in-motion data acquisition on a legged robot and evaluates the particular building blocks and concepts applied in contemporary SLAM systems against these problems. The SLAM systems are evaluated on two independent experimental set-ups, applying a well-established methodology and performance metrics.
Design/methodology/approach: Four feature-based SLAM architectures are evaluated with respect to their suitability for localization of multi-legged walking robots. The evaluation methodology is based on the computation of the absolute trajectory error (ATE) and relative pose error (RPE), performance metrics that are well established in the robotics community. Four sequences of RGB-D frames acquired in two independent experiments using two different six-legged walking robots are used in the evaluation process.
Findings: The experiments revealed that the predominant problems of legged robots as platforms for SLAM are the abrupt and unpredictable sensor motions, as well as oscillations and vibrations, which corrupt the images captured in motion. The tested adaptive gait allowed the evaluated SLAM systems to reconstruct proper trajectories. The bundle-adjustment-based SLAM systems produced the best results, thanks to the use of a map, which makes it possible to establish a large number of constraints for the estimated trajectory.
Research limitations/implications: The evaluation was performed using indoor terrain mockups. Experiments in more natural and challenging environments are envisioned as part of future research.
Practical implications: The lack of accurate self-localization methods is considered one of the most important limitations of walking robots. Thus, the evaluation of state-of-the-art SLAM methods on legged platforms may be useful for all researchers working on walking robots’ autonomy and their use in various applications, such as search, security, agriculture and mining.
Originality/value: The main contribution lies in the integration of state-of-the-art SLAM methods on walking robots and their thorough experimental evaluation using a well-established methodology. Moreover, a SLAM system designed especially for RGB-D sensors and real-world applications is presented in detail.
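For reference, the ATE metric used in this methodology reduces to the root-mean-square of the translational differences between time-associated estimated and ground-truth poses; the sketch below assumes the two trajectories are already associated and aligned (the alignment step, e.g. Horn's method, is omitted) and uses synthetic data purely for illustration.

```python
import numpy as np

# Absolute trajectory error (ATE) as an RMSE over aligned position pairs.
def ate_rmse(estimated: np.ndarray, ground_truth: np.ndarray) -> float:
    diffs = estimated - ground_truth            # (N, 3) position differences
    return float(np.sqrt(np.mean(np.sum(diffs ** 2, axis=1))))

gt = np.cumsum(np.random.default_rng(1).normal(size=(100, 3)) * 0.01, axis=0)
est = gt + np.random.default_rng(2).normal(scale=0.02, size=gt.shape)
print("ATE RMSE:", ate_rmse(est, gt))
```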


2017 ◽  
Vol 2017 ◽  
pp. 1-10
Author(s):  
Hsuan-Ming Huang ◽  
Ing-Tsung Hsiao

Background and Objective. Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing (CS)-based reconstruction methods. However, these methods have some disadvantages, including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques.
Methods. First, total difference minimization (TDM) was implemented using soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm to accelerate convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage-thresholding algorithm to OSTR and TDM-STF, respectively.
Results. Results obtained from simulation and phantom studies showed that many speed-up techniques can be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increase in computation time (≤10%) was minor compared to the acceleration provided by the proposed method.
Conclusions. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
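Two of the named ingredients, shown in isolation on a toy sparse-recovery problem (this is not the authors' full TDM-STF/OSTR pipeline): the soft-threshold (shrinkage) operator used by STF-type methods, and the FISTA momentum update that accelerates a plain proximal-gradient scheme.

```python
import numpy as np

# Soft-threshold operator and FISTA acceleration on a toy l1-regularized
# least-squares problem (illustrative only).
def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100)) / np.sqrt(60)
x_true = np.zeros(100); x_true[[5, 30, 70]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(60)

L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(100); y = x.copy(); t = 1.0
for _ in range(200):
    grad = A.T @ (A @ y - b)
    x_new = soft_threshold(y - grad / L, 0.05 / L)   # proximal gradient step
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2         # FISTA momentum factor
    y = x_new + ((t - 1) / t_new) * (x_new - x)      # extrapolation
    x, t = x_new, t_new

print("recovered support:", np.nonzero(np.abs(x) > 0.1)[0])
```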


2013 ◽  
Vol 2013 ◽  
pp. 1-14
Author(s):  
Joshua Kim ◽  
Huaiqun Guan ◽  
David Gersten ◽  
Tiezhi Zhang

Tetrahedron beam computed tomography (TBCT) performs volumetric imaging using a stack of fan beams generated by a multiple-pixel X-ray source. While the TBCT system was designed to overcome the scatter and detector issues faced by cone beam computed tomography (CBCT), it still suffers from the same large-cone-angle artifacts as CBCT due to the use of approximate reconstruction algorithms. It has been shown that iterative reconstruction algorithms are better able to model irregular system geometries and that algebraic iterative algorithms in particular are able to reduce the cone artifacts appearing at large cone angles. In this paper, the SART algorithm is modified for use with the different TBCT geometries and is tested using both simulated projection data and data acquired with the TBCT benchtop system. The modified SART reconstruction algorithms were able to mitigate the effects of using data generated at large cone angles and to reconstruct CT images without introducing artifacts due to either longitudinal or transverse truncation of the data sets. Algebraic iterative reconstruction can be especially useful for dual-source dual-detector TBCT, wherein the cone angle is largest in the center of the field of view.
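A minimal sketch of the SART update on a toy nonnegative system matrix: the residual is normalized by ray (row) sums, back-projected, and normalized by pixel (column) sums. The TBCT-specific geometry and weighting are not modelled; the system below is synthetic and purely illustrative.

```python
import numpy as np

# Relaxed SART iterations on a toy consistent system.
rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((80, 64)))    # toy nonnegative system matrix
x_true = rng.uniform(0, 1, 64)
b = A @ x_true                               # consistent projection data

row_sums = A.sum(axis=1)                     # per-ray normalization
col_sums = A.sum(axis=0)                     # per-pixel normalization
x = np.zeros(64)
for _ in range(100):
    residual = (b - A @ x) / row_sums
    x += 0.5 * (A.T @ residual) / col_sums   # relaxation factor 0.5

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```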


2021 ◽  
Vol 1 (2) ◽  
pp. 1-23
Author(s):  
Arkadiy Dushatskiy ◽  
Tanja Alderliesten ◽  
Peter A. N. Bosman

Surrogate-assisted evolutionary algorithms have the potential to be of high value for real-world optimization problems when fitness evaluations are expensive, limiting the number of evaluations that can be performed. In this article, we consider the domain of pseudo-Boolean functions in a black-box setting. Moreover, instead of using a surrogate model as an approximation of a fitness function, we propose to learn the coefficients of the Walsh decomposition of a fitness function exactly and use the Walsh decomposition as a surrogate. If the coefficients are learned correctly, the Walsh decomposition values perfectly match the fitness function, and thus the optimal solution to the problem can be found by optimizing the surrogate without any additional evaluations of the original fitness function. It is known that the Walsh coefficients can be efficiently learned for pseudo-Boolean functions with k-bounded epistasis and known problem structure. We propose to learn dependencies between variables first, thereby substantially reducing the number of Walsh coefficients to be calculated. After the accurate Walsh decomposition is obtained, the surrogate model is optimized using GOMEA, which is considered a state-of-the-art binary optimization algorithm. We compare the proposed approach with standard GOMEA and two other Walsh decomposition-based algorithms. The benchmark functions in the experiments are well-known trap functions, NK-landscapes, MaxCut, and MAX3SAT problems. The experimental results demonstrate that the proposed approach is scalable at the expected complexity of O(ℓ log ℓ) function evaluations when the number of subfunctions is O(ℓ) and all subfunctions are k-bounded, outperforming all considered algorithms.
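A small illustration of the Walsh-decomposition idea for one k-bounded subfunction (here a classic 3-bit trap): coefficients are computed by direct enumeration and the surrogate is evaluated from them. Learning the coefficients from samples, the dependency-learning step, and GOMEA itself are not shown; all names are illustrative.

```python
from itertools import combinations, product

# Exact Walsh coefficients of a small pseudo-Boolean function by enumeration,
# using the basis psi_S(x) = (-1)^(sum of x_i for i in S).
def walsh_coefficients(f, n):
    coeffs = {}
    inputs = list(product((0, 1), repeat=n))
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            sign = lambda x: (-1) ** sum(x[i] for i in subset)
            coeffs[subset] = sum(f(x) * sign(x) for x in inputs) / 2 ** n
    return coeffs

def surrogate(coeffs, x):
    """Evaluate the Walsh-basis surrogate at a bit string x."""
    return sum(w * (-1) ** sum(x[i] for i in s) for s, w in coeffs.items())

trap3 = lambda x: 3.0 if sum(x) == 3 else 2 - sum(x)   # classic deceptive trap
c = walsh_coefficients(trap3, 3)
print(all(abs(surrogate(c, x) - trap3(x)) < 1e-9
          for x in product((0, 1), repeat=3)))          # surrogate matches exactly
```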


2021 ◽  
Vol 7 (2) ◽  
pp. 19
Author(s):  
Tirivangani Magadza ◽  
Serestina Viriri

Quantitative analysis of brain tumors provides valuable information for better understanding tumor characteristics and for treatment planning. Accurate segmentation of lesions requires more than one imaging modality with varying contrasts. As a result, manual segmentation, which is arguably the most accurate segmentation method, would be impractical for more extensive studies. Deep learning has recently emerged as a solution for quantitative analysis due to its record-shattering performance. However, medical image analysis has its own unique challenges. This paper presents a review of state-of-the-art deep learning methods for brain tumor segmentation, clearly highlighting their building blocks and various strategies. We end with a critical discussion of open challenges in medical image analysis.

