Scalable robust graph and feature extraction for arbitrary vessel networks in large volumetric datasets

2021
Vol 22 (1)
Author(s):
Dominik Drees
Aaron Scherzinger
René Hägerling
Friedemann Kiefer
Xiaoyi Jiang

Abstract
Background: Recent advances in 3D imaging technologies provide novel insights to researchers and reveal ever finer detail of the examined specimens, especially in the biomedical domain, but they also impose huge scalability challenges on automated analysis algorithms because dataset sizes grow rapidly. In particular, existing research on automated vessel network analysis does not always consider the memory requirements of the proposed algorithms and often generates a large number of spurious branches for structures consisting of many voxels. Moreover, these algorithms frequently carry further restrictions, such as being limited to tree topologies or relying on the properties of specific image modalities.
Results: We propose an iterative pipeline, scalable in terms of computational cost and required main memory and robust in its behavior, that extracts an annotated abstract graph representation from the foreground segmentation of vessel networks of arbitrary topology and vessel shape. The novel iterative refinement process is controlled by a single, dimensionless, a-priori determinable parameter.
Conclusions: Using the proposed pipeline, we are able to analyze the topology of volumes of roughly 1 TB on commodity hardware for the first time. We demonstrate improved robustness with respect to surface noise, vessel shape deviation and anisotropic resolution compared to the state of the art. An implementation of the presented pipeline is publicly available in version 5.1 of the volume rendering and processing engine Voreen.
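As a rough illustration of the kind of graph abstraction such a pipeline produces, the following is a minimal sketch using off-the-shelf tools; it is not the memory-aware Voreen implementation described above, and the function names and parameters are illustrative only.

```python
# Minimal sketch, not the Voreen pipeline: derive a voxel-level graph from a
# binary vessel segmentation via skeletonization (method="lee" handles 3D
# volumes in recent scikit-image releases). Branch and end points can then be
# read off from node degrees; this naive version is not tuned for TB volumes.
import numpy as np
import networkx as nx
from skimage.morphology import skeletonize

def skeleton_graph(segmentation: np.ndarray) -> nx.Graph:
    skel = skeletonize(segmentation.astype(bool), method="lee").astype(bool)
    coords = np.argwhere(skel)
    index = {tuple(c): i for i, c in enumerate(coords)}
    offsets = [np.array(o) - 1 for o in np.ndindex(3, 3, 3) if o != (1, 1, 1)]
    g = nx.Graph()
    g.add_nodes_from(index.values())
    for i, c in enumerate(coords):
        for off in offsets:          # connect 26-neighbouring skeleton voxels
            j = index.get(tuple(c + off))
            if j is not None and j > i:
                g.add_edge(i, j)
    return g

# Nodes with degree >= 3 act as branch points, nodes with degree == 1 as endpoints.
```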

Sensors
2021
Vol 21 (10)
pp. 3327
Author(s):
Vicente Román
Luis Payá
Adrián Peidró
Mónica Ballesta
Oscar Reinoso

Over the last few years, mobile robotics has experienced great development thanks to the wide variety of problems that can be solved with this technology. An autonomous mobile robot must be able to operate in a priori unknown environments, planning its trajectory and navigating to the required target points. With this aim, it is crucial to solve the mapping and localization problems with accuracy and acceptable computational cost. The use of omnidirectional vision systems has emerged as a robust choice thanks to the large quantity of information they can extract from the environment. The images must be processed to obtain relevant information that permits solving the mapping and localization problems robustly. The classical frameworks to address these problems are based on the extraction, description and tracking of local features or landmarks. More recently, however, a new family of methods has emerged as a robust alternative in mobile robotics. It consists of describing each image as a whole, which leads to conceptually simpler algorithms. While methods based on local features have been extensively studied and compared in the literature, those based on global appearance still merit a deeper study of their performance. In this work, a comparative evaluation of six global-appearance description techniques in localization tasks is carried out, both in terms of accuracy and computational cost. Several sets of images captured in a real environment are used with this aim, including typical phenomena such as changes in lighting conditions, visual aliasing, partial occlusions and noise.
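For context, a global-appearance pipeline reduces to two steps: describe the whole image with a single holistic vector, then retrieve the most similar map image. The sketch below uses a HOG descriptor purely as an example; the six descriptors actually compared in the paper and their parameter settings are not reproduced here.

```python
# Illustrative global-appearance localization (HOG is just one possible holistic
# descriptor; parameter values below are arbitrary examples, not the paper's).
import numpy as np
from skimage.feature import hog

def global_descriptor(image: np.ndarray) -> np.ndarray:
    # One vector describing the whole (e.g. panoramic) grayscale image.
    return hog(image, orientations=8, pixels_per_cell=(32, 32),
               cells_per_block=(1, 1), feature_vector=True)

def localize(query: np.ndarray, map_descriptors: np.ndarray) -> int:
    # Nearest-neighbour retrieval: index of the most similar stored map image.
    dists = np.linalg.norm(map_descriptors - global_descriptor(query), axis=1)
    return int(np.argmin(dists))

# map_descriptors would be built offline as
# np.stack([global_descriptor(img) for img in map_images]).
```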


2021
Vol 7 (1)
Author(s):
Sansit Patnaik
Fabio Semperlotti

Abstract
This study presents the formulation, the numerical solution, and the validation of a theoretical framework based on the concept of variable-order mechanics and capable of modeling dynamic fracture in brittle and quasi-brittle solids. More specifically, the reformulation of the elastodynamic problem via variable and fractional-order operators enables a unique and extremely powerful approach to model nucleation and propagation of cracks in solids under dynamic loading. The resulting dynamic fracture formulation is fully evolutionary, hence enabling the analysis of complex crack patterns without requiring any a priori assumption on the damage location and the growth path, and without using any algorithm to numerically track the evolving crack surface. The evolutionary nature of the variable-order formalism also prevents the need for additional partial differential equations to predict the evolution of the damage field, hence suggesting a conspicuous reduction in complexity and computational cost. Remarkably, the variable-order formulation is naturally capable of capturing extremely detailed features characteristic of dynamic crack propagation such as crack surface roughening as well as single and multiple branching. The accuracy and robustness of the proposed variable-order formulation are validated by comparing the results of direct numerical simulations with experimental data of typical benchmark problems available in the literature.
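For readers unfamiliar with the formalism, a representative (not necessarily the authors' exact) variable-order operator is a Caputo-type derivative whose order is itself a function of space, time, or a state variable such as damage:

```latex
% One common definition of a variable-order Caputo derivative (illustrative;
% the specific operator and order law used in the paper may differ).
{}^{C}\!D^{\alpha(\mathbf{x},t)}_{t}\,u(\mathbf{x},t)
  = \frac{1}{\Gamma\bigl(1-\alpha(\mathbf{x},t)\bigr)}
    \int_{0}^{t} (t-\tau)^{-\alpha(\mathbf{x},t)}\,
    \frac{\partial u(\mathbf{x},\tau)}{\partial \tau}\,\mathrm{d}\tau,
  \qquad 0 < \alpha(\mathbf{x},t) \le 1 .
```

Letting the order evolve with the local state is what allows the same governing equation to transition between undamaged and cracked behavior without a separate damage equation.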


2021
Vol 143 (8)
Author(s):
Opeoluwa Owoyele
Pinaki Pal
Alvaro Vidal Torreira

Abstract
The use of machine learning (ML)-based surrogate models is a promising technique to significantly accelerate simulation-driven design optimization of internal combustion (IC) engines, due to the high computational cost of running computational fluid dynamics (CFD) simulations. However, training the ML models requires hyperparameter selection, which is often done using trial-and-error and domain expertise. Another challenge is that the data required to train these models are often unknown a priori. In this work, we present an automated hyperparameter selection technique coupled with an active learning approach to address these challenges. The technique presented in this study involves the use of a Bayesian approach to optimize the hyperparameters of the base learners that make up a super learner model. In addition to performing hyperparameter optimization (HPO), an active learning approach is employed, where the process of data generation using simulations, ML training, and surrogate optimization is performed repeatedly to refine the solution in the vicinity of the predicted optimum. The proposed approach is applied to the optimization of a compression ignition engine with control parameters relating to fuel injection, in-cylinder flow, and thermodynamic conditions. It is demonstrated that by automatically selecting the best values of the hyperparameters, a 1.6% improvement in merit value is obtained, compared to an improvement of 1.0% with default hyperparameters. Overall, the framework introduced in this study reduces the need for technical expertise in training ML models for optimization while also reducing the number of simulations needed for performing surrogate-based design optimization.
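The active-learning loop described above can be pictured roughly as follows. This is a hedged sketch: run_simulation stands in for the expensive CFD merit evaluation, a plain Gaussian-process surrogate replaces the super learner, and the acquisition rule is a generic optimistic bound rather than the authors' Bayesian HPO machinery.

```python
# Hedged sketch of a surrogate-assisted active-learning loop; not the authors'
# framework. run_simulation is a placeholder for a CFD-based merit function.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_simulation(x: np.ndarray) -> float:
    # Placeholder objective standing in for an expensive CFD evaluation.
    return -float(np.sum((x - 0.3) ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(10, 4))             # initial design of experiments
y = np.array([run_simulation(x) for x in X])

for _ in range(5):                                  # active-learning iterations
    surrogate = GaussianProcessRegressor().fit(X, y)
    candidates = rng.uniform(0.0, 1.0, size=(1000, 4))
    mu, sigma = surrogate.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(mu + 1.96 * sigma)]   # optimistic acquisition
    X = np.vstack([X, x_next])
    y = np.append(y, run_simulation(x_next))        # refine near the predicted optimum

best = X[np.argmax(y)]                              # current best control parameters
```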


Author(s):  
Martín A. Pucheta
Nicolás E. Ulrich
Alberto Cardona

The graph layout problem arises frequently in the conceptual stage of mechanism design, especially in the enumeration process, where a large number of topological solutions must be analyzed. The two main objectives of graph layout are the avoidance or minimization of edge crossings and aesthetics. Edge crossings cannot always be avoided by force-directed algorithms, since these reach an energy minimum that depends on the initial positions of the vertices, which are often randomly generated. Combinatorial algorithms based on the properties of the graph representation of the kinematic chain can be used to find an adequate initial position of the vertices with minimal edge crossings. To select an initial layout, the minimal independent loops of the graph can be drawn as circles followed by arcs, in all possible forms. The computational cost of this algorithm grows factorially with the number of independent loops. This paper presents a combination of two algorithms: a combinatorial algorithm followed by a force-directed algorithm based on spring repulsion and electrical attraction, including a new concept of vertex-to-edge repulsion to improve aesthetics and minimize crossings. Atlases of graphs of complex kinematic chains are used to validate the results. The layouts obtained are of good quality in terms of minimizing edge crossings and maximizing aesthetic characteristics.
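As a rough picture of the force-directed stage only (a classical spring/electrostatic model; the paper's combinatorial initialization and its vertex-to-edge repulsion term are not reproduced here):

```python
# Illustrative force-directed refinement step for a mechanism graph layout.
# Classical model only: all vertex pairs repel, edges act as springs; the
# vertex-to-edge repulsion introduced in the paper is not included.
import numpy as np

def force_step(pos, edges, k_spring=0.05, k_repel=0.01, dt=0.1):
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):                       # pairwise repulsion between vertices
        for j in range(n):
            if i == j:
                continue
            d = pos[i] - pos[j]
            dist = np.linalg.norm(d) + 1e-9
            forces[i] += k_repel * d / dist**3
    for i, j in edges:                       # spring attraction along edges
        d = pos[j] - pos[i]
        forces[i] += k_spring * d
        forces[j] -= k_spring * d
    return pos + dt * forces                 # one explicit update; iterate to converge

# pos: (n, 2) vertex coordinates, e.g. taken from the combinatorial initial layout.
```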


2019
Vol 25 (9)
pp. 1516-1524
Author(s):
Aiman A. Alshare
Fedrico Calzone
Maurizio Muzzupappa

Purpose: The purpose of this study is to investigate the feasibility of using additive manufacturing (AM) to produce an efficient valve manifold for a hydraulic actuator by redesigning valve blocks originally produced by conventional methods.
Design/methodology/approach: A priori, a computational fluid dynamics (CFD) analysis was carried out using the software ANSYS Fluent to determine the optimal flow path resulting in the least pressure drop, the highest average velocity and the least energy losses. Fluid–structure interaction (FSI) simulations, driven by the pressure distribution imported from the CFD, were conducted to determine the resulting loading and deformations of the manifold assembly.
Findings: The new design offers a 23 per cent reduction of oil volume in the circuit while weighing 84 per cent less. With the new design, a decrease in pressure drop of nearly 25 per cent and an increase in average velocity of 2.5 per cent are achieved. Good agreement, within 16 per cent, is found between the experiment and the computational model in terms of pressure drop.
Originality/value: It is possible to build an efficient hydraulic manifold design by iterative refinement for production via selective laser melting (SLM), minimizing the material used by circumventing the need for support structures in non-machinable features of the manifold.


2019
Vol 130
pp. 01013
Author(s):
Hariyo Priambudi Setyo Pratomo
Fandi Dwiputra Suprianto
Teng Sutrisno

Turbulence simulation remains an active research area in computational engineering. Along with the increase in computing power, and driven by the motivation to improve the accuracy of statistical turbulence modeling approaches while reducing the expensive computational cost of direct numerical and turbulence-scale-resolving simulations, various hybrid turbulence models capable of capturing unsteadiness in the turbulence are now accessible. Nevertheless, this introduces the daunting task of selecting an appropriate method for each case, as the inherent nature of the turbulence cannot be known a priori. The aim of this paper is to address recent progress and further research within a branch of the hybrid RANS-LES models examined by the first author, for which simple test cases that nevertheless generate complex turbulent flows are available from experiments. In particular, the failure of a seamless hybrid formulation that does not depend explicitly on the grid scale is discussed. From the literature, one can at least proceed with confidence when choosing a potential hybrid model by intuitively distinguishing between strongly and weakly unstable turbulent flows.


2007
Vol 19 (1)
pp. 80-110
Author(s):
Colin Molter
Utku Salihoglu
Hugues Bersini

This letter aims to study the impact of iterative Hebbian learning algorithms on the recurrent neural network's underlying dynamics. First, an iterative supervised learning algorithm is discussed. An essential improvement of this algorithm consists of indexing the attractor information items by means of external stimuli rather than by using only initial conditions, as Hopfield originally proposed. Modifying the stimuli mainly results in a change of the entire internal dynamics, leading to an enlargement of the set of attractors and potential memory bags. The impact of the learning on the network's dynamics is the following: the more information to be stored as limit cycle attractors of the neural network, the more chaos prevails as the background dynamical regime of the network. In fact, the background chaos spreads widely and adopts a very unstructured shape similar to white noise. Next, we introduce a new form of supervised learning that is more plausible from a biological point of view: the network has to learn to react to an external stimulus by cycling through a sequence that is no longer specified a priori. Based on its spontaneous dynamics, the network decides “on its own” the dynamical patterns to be associated with the stimuli. Compared with classical supervised learning, huge enhancements in storage capacity and computational cost have been observed. Moreover, this new form of supervised learning, by being more “respectful” of the network's intrinsic dynamics, maintains much more structure in the obtained chaos. It is still possible to observe the traces of the learned attractors in the chaotic regime. This complex but still very informative regime is referred to as “frustrated chaos.”
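A minimal caricature of the stimulus-indexed Hebbian idea discussed above (illustrative only; the iterative algorithm in the letter is considerably more involved):

```python
# Toy sketch: a recurrent state update biased by an external stimulus, followed
# by a Hebbian weight change. The stimulus, rather than the initial condition,
# determines which co-activation pattern gets reinforced. Not the authors' full
# iterative supervised algorithm.
import numpy as np

def hebbian_step(W, x, stimulus, lr=0.01):
    x_new = np.tanh(W @ x + stimulus)     # stimulus-driven recurrent dynamics
    W = W + lr * np.outer(x_new, x)       # strengthen co-active connections
    return W, x_new
```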


2020
Vol 54 (2)
pp. 649-677
Author(s):
Abdul-Lateef Haji-Ali
Fabio Nobile
Raúl Tempone
Sören Wolfers

Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single-level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
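In symbols, and with notation that may differ from the paper's, the standard weighted least squares projection from samples y_1, ..., y_m drawn from a sampling measure mu, together with the usual multilevel telescoping sum, reads:

```latex
% Standard weighted least squares projection and multilevel telescoping sum
% (illustrative notation; see the paper for the precise assumptions).
\hat{f} = \operatorname*{arg\,min}_{p \in V_n}
  \frac{1}{m}\sum_{i=1}^{m} w(y_i)\,\bigl|f(y_i) - p(y_i)\bigr|^{2},
  \qquad w = \frac{\mathrm{d}\rho}{\mathrm{d}\mu},
\qquad
\hat{f}^{\mathrm{ML}} = \sum_{\ell=1}^{L} \hat{P}_{\ell}\bigl(f_{\ell} - f_{\ell-1}\bigr),
  \quad f_{0} \equiv 0 .
```

Here, in the usual multilevel spirit, f_ell denotes sample evaluations computed at discretization level ell, and each hat{P}_ell is a weighted least squares projection; the corrections f_ell - f_{ell-1} shrink with ell, so fewer expensive high-accuracy samples are needed.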


2004
Vol 64 (3a)
pp. 383-398
Author(s):
M. L. Christoffersen
M. E. Araújo
M. A. M. Moreira

Total sequence phylogenies have low information content. Common misconceptions are that character quality can be ignored and that relying on computer algorithms is enough. Despite widespread preference for a posteriori methods of character evaluation, a priori methods are necessary to produce transformation series that are independent of tree topologies. We propose a stepwise qualitative method for analyzing protein sequences. Informative codons are selected, alternative amino acid transformation series are analyzed, and the most parsimonious transformations are hypothesized. We conduct four phylogenetic analyses of philodryanine snakes. The tree based on all nucleotides produces the least resolution. Trees based on the exclusion of third positions, on an asymmetric step matrix, and on our protocol produce similar results. Our method eliminates noise by hypothesizing explicit transformation series for each informative protein-coding amino acid. This parallels qualitative methods for morphological data, in which only characters successfully interpreted in a phylogenetic context are used in cladistic analyses. The method allows utilizing character information contained in the original sequence alignment and, therefore, has higher resolution in inferring a phylogenetic tree than some traditional methods (such as distance methods).


Geophysics
2020
Vol 85 (3)
pp. R177-R194
Author(s):
Mattia Aleardi
Alessandro Salusti

A reliable assessment of the posterior uncertainties is a crucial aspect of any amplitude versus angle (AVA) inversion due to the severe ill-conditioning of this inverse problem. To accomplish this task, numerical Markov chain Monte Carlo algorithms are usually used when the forward operator is nonlinear. The downside of these algorithms is the considerable number of samples needed to attain stable posterior estimations, especially in high-dimensional spaces. To overcome this issue, we assessed the suitability of the Hamiltonian Monte Carlo (HMC) algorithm for nonlinear target- and interval-oriented AVA inversions for the estimation of elastic properties and associated uncertainties from prestack seismic data. The target-oriented approach inverts the AVA responses of the target reflection by adopting the nonlinear Zoeppritz equations, whereas the interval-oriented method inverts the seismic amplitudes along a time interval using a 1D convolutional forward model still based on the Zoeppritz equations. HMC uses an artificial Hamiltonian system in which a model is viewed as a particle moving along a trajectory in an extended space. In this context, the inclusion of the derivative information of the misfit function makes possible long-distance moves with a high probability of acceptance from the current position toward a new independent model. In our application, we adopt a simple Gaussian a priori distribution that allows for an analytical inclusion of geostatistical constraints into the inversion framework, and we also develop a strategy that replaces the numerical computation of the Jacobian with a matrix operator analytically derived from a linearization of the Zoeppritz equations. Synthetic and field data inversions demonstrate that HMC is a very promising approach for Bayesian AVA inversion that guarantees an efficient sampling of the model space and retrieves reliable estimations and accurate uncertainty quantifications with an affordable computational cost.
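For reference, the core of any HMC sampler is the leapfrog proposal sketched below (textbook form; the AVA-specific misfit, the Gaussian prior and the analytic Jacobian described in the paper are all abstracted behind a generic log-posterior and its gradient).

```python
# Generic leapfrog HMC proposal with Metropolis acceptance (textbook scheme;
# log_post / grad_log_post stand in for the AVA posterior and its gradient).
import numpy as np

def hmc_step(m, log_post, grad_log_post, eps=0.01, n_leapfrog=20, rng=None):
    rng = rng or np.random.default_rng()
    p0 = rng.standard_normal(m.size)               # auxiliary momentum
    m_new = m.copy()
    p = p0 + 0.5 * eps * grad_log_post(m_new)      # initial half step
    for _ in range(n_leapfrog):
        m_new = m_new + eps * p                    # full position step
        p = p + eps * grad_log_post(m_new)         # full momentum step
    p = p - 0.5 * eps * grad_log_post(m_new)       # correct last step to a half step
    h_old = -log_post(m) + 0.5 * p0 @ p0           # Hamiltonian before / after
    h_new = -log_post(m_new) + 0.5 * p @ p
    return m_new if rng.uniform() < np.exp(h_old - h_new) else m
```

The gradient term is what allows the long, high-acceptance moves mentioned in the abstract; in the paper it is made cheap by the analytic linearization of the Zoeppritz equations.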

