Comparative Study of Matlab ODE Solvers for the Korakianitis and Shi Model

Author(s):  
Eyere Emagbetere ◽  
Oluleke O. Oluwole ◽  
Tajudeen A.O. Salau

Changing parameters of the Korakianitis and Shi heart valve model over a cardiac cycle motivated an investigation of appropriate numerical techniques offering good speed and accuracy. Two sets of parameters were selected for the numerical test. For the seven MATLAB ODE solvers, the computed results, computational cost and execution time were observed for varied error tolerances and initial time steps. The results were evaluated with descriptive statistics, the Pearson correlation and ANOVA. The dependence of the computed results, the accuracy of the methods, and the computational cost and execution time of all the solvers on relative tolerance and initial time step was ascertained. Our findings provide important information that can be useful for selecting a MATLAB ODE solver suited to differential equations with time-varying parameters and changing stiffness properties.
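
The MATLAB suite itself is not scriptable here, but the speed/accuracy trade-off the abstract measures can be sketched with SciPy's closest analogues (RK45 roughly corresponds to ode45, BDF to ode15s). The stiff Van der Pol test problem and the tolerance values below are illustrative assumptions, not the paper's heart valve model.

```python
# Hedged sketch: compare a non-stiff and a stiff solver on a stiff ODE,
# recording right-hand-side evaluations (a cost proxy) and wall time.
import time
import numpy as np
from scipy.integrate import solve_ivp

def vanderpol(t, y, mu=100.0):
    # Stiff Van der Pol oscillator; stiffness grows with mu.
    return [y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]]

for method in ("RK45", "BDF"):
    for rtol in (1e-3, 1e-6):
        t0 = time.perf_counter()
        sol = solve_ivp(vanderpol, (0.0, 100.0), [2.0, 0.0],
                        method=method, rtol=rtol, atol=1e-9)
        elapsed = time.perf_counter() - t0
        # nfev counts function evaluations, a proxy for computational cost.
        print(f"{method:5s} rtol={rtol:.0e}  nfev={sol.nfev:7d}  "
              f"time={elapsed:.3f}s")
```

On a stiff problem like this, the implicit BDF method typically needs far fewer function evaluations than the explicit RK45, mirroring the kind of solver-dependence the study quantifies.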

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1430
Author(s):  
Xiaogang Jia ◽  
Wei Chen ◽  
Zhengfa Liang ◽  
Xin Luo ◽  
Mingfei Wu ◽  
...  

Stereo matching is an important research field of computer vision. Because of the dimensionality of cost aggregation, current neural-network-based stereo methods struggle to trade off speed and accuracy. To this end, we integrate fast 2D stereo methods with accurate 3D networks to improve performance and reduce running time. We leverage a 2D encoder-decoder network to generate a rough disparity map and construct a disparity range to guide the 3D aggregation network, which can significantly improve accuracy and reduce computational cost. We use a stacked hourglass structure to refine the disparity from coarse to fine. We evaluated our method on three public datasets. According to the official KITTI benchmark results, our network can generate an accurate result in 80 ms on a modern GPU. Compared to other 2D stereo networks (AANet, DeepPruner, FADNet, etc.), our network achieves a large improvement in accuracy. Meanwhile, it is significantly faster than other 3D stereo networks (5× faster than PSMNet, 7.5× faster than CSN and 22.5× faster than GANet), demonstrating the effectiveness of our method.
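
The core idea of guiding a 3D aggregation network with a rough 2D estimate can be hedged into a few lines: each pixel's disparity search is restricted to a narrow band around the coarse value. The `radius` and `max_disp` values below are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: per-pixel disparity search bounds from a coarse map,
# shrinking the cost volume from max_disp candidates to ~2*radius+1.
import numpy as np

def disparity_bounds(coarse_disp, radius=4, max_disp=192):
    """Per-pixel search interval [lo, hi) around the coarse estimate."""
    lo = np.clip(np.round(coarse_disp) - radius, 0, max_disp - 1)
    hi = np.clip(np.round(coarse_disp) + radius + 1, 1, max_disp)
    return lo.astype(int), hi.astype(int)

coarse = np.array([[10.2, 50.7], [0.3, 190.9]])  # toy 2x2 disparity map
lo, hi = disparity_bounds(coarse)
print(lo)
print(hi)
```

Aggregating matching costs only inside these intervals is what makes the 3D stage cheap while retaining its accuracy.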


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 3943
Author(s):  
Nicolas Montés ◽  
Francisco Chinesta ◽  
Marta C. Mora ◽  
Antonio Falcó ◽  
Lucia Hilario ◽  
...  

This paper presents a real-time global path planning method for mobile robots using harmonic functions, such as solutions of the Poisson equation, based on the Proper Generalized Decomposition (PGD) of these functions. The main property of the proposed technique is that the real-time computational cost is negligible, even if the robot is disturbed or the goal is changed. The main idea of the method is the off-line generation, for a given environment, of the whole set of paths from any start and goal configuration of a mobile robot, namely the computational vademecum, derived from a harmonic potential field, in order to use it on-line for decision-making purposes. Until now, the resolution of the Laplace or Poisson equations has relied on traditional numerical techniques that are unfeasible for real-time calculation. This drawback has prevented the extensive use of harmonic functions in autonomous navigation, despite their powerful properties. The numerical technique that reverses this situation is the Proper Generalized Decomposition. To demonstrate and validate the properties of the PGD-vademecum in a potential-guided path planning framework, both real and simulated implementations have been developed. Simulated scenarios, such as an L-shaped corridor and a benchmark bug trap, are used, and a real navigation of a LEGO® MINDSTORMS robot running in static environments with variable start and goal configurations is shown. This device was selected due to its restricted computational and memory capabilities, and it is a good example of how the method's properties could help the development of social robots.
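
As a sketch of standard harmonic-potential planning (not the PGD vademecum itself, which replaces this on-line solve with an off-line decomposition), one can relax Laplace's equation on a grid and then descend the field; the grid size, iteration count and start/goal cells below are arbitrary choices.

```python
# Hedged sketch: Jacobi relaxation of a harmonic potential on a grid,
# followed by steepest descent from start to goal.
import numpy as np

N = 20
phi = np.ones((N, N))           # walls and free space start at potential 1
free = np.ones((N, N), bool)
free[0, :] = free[-1, :] = free[:, 0] = free[:, -1] = False  # boundary walls
goal = (N - 2, N - 2)
free[goal] = False
phi[goal] = 0.0                 # goal is the unique minimum

for _ in range(5000):           # Jacobi relaxation of the free interior
    nxt = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                  np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    phi = np.where(free, nxt, phi)

# Harmonic fields have no spurious local minima, so greedy descent over
# the 4-neighbourhood always reaches the goal.
pos, path = (1, 1), [(1, 1)]
while pos != goal:
    i, j = pos
    nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    pos = min(nbrs, key=lambda p: phi[p])
    path.append(pos)
print(path[-1])  # the goal cell
```

The absence-of-local-minima property shown here is exactly the "powerful property" the abstract refers to; the PGD contribution is making the field available in real time.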


2019 ◽  
Vol 17 (06) ◽  
pp. 1950077 ◽  
Author(s):  
Sheng-Tong Zhou ◽  
Qian Xiao ◽  
Jian-Min Zhou ◽  
Hong-Guang Li

The Rackwitz–Fiessler (RF) method is well accepted as an efficient way to solve uncorrelated non-Normal reliability problems by transforming the original non-Normal variables into equivalent Normal variables based on the equivalent Normal conditions. However, this traditional RF method is often abandoned when correlated reliability problems are involved, because the point-by-point implementation of the equivalent Normal conditions makes it hard for the RF method to clearly describe the correlations of the transformed variables. To this end, some improvements on the traditional RF method are presented from the viewpoints of isoprobabilistic transformation and copula theory. First, the forward transformation of the RF method from the original space to the standard Normal space is interpreted geometrically as an isoprobabilistic transformation. This viewpoint lets us describe the stochastic dependence of the transformed variables in the same way as in the Nataf transformation (NATAF). Thus, a corresponding enhanced RF (EnRF) method is proposed to deal with correlated reliability problems described by the Pearson linear correlation. Further, we uncover the implicit Gaussian-copula hypothesis of the RF method from the invariance theorem of copulas and the strictly increasing isoprobabilistic transformation. Meanwhile, based on copula-only rank correlations such as the Spearman and Kendall correlations, two improved RF (IRF) methods are introduced to overcome the potential pitfalls of the Pearson correlation in EnRF. Taking NATAF as a reference, the computational cost and efficiency of the three proposed RF methods are then discussed within the Hasofer–Lind reliability algorithm. Finally, four illustrative structural reliability examples are presented to validate the applicability and advantages of the newly proposed RF methods.
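
The equivalent Normal condition at the heart of the RF method matches the CDF and PDF of a non-Normal variable with those of a Normal one at the current design point. A minimal sketch, using a lognormal variable as an assumed example (not one of the paper's four cases):

```python
# Hedged sketch of the RF equivalent-normal step at a point x*:
#   Phi((x - mu_eq)/sigma_eq) = F(x)   and   pdf's matched at x.
from scipy.stats import lognorm, norm

def equivalent_normal(dist, x):
    """Return (mu_eq, sigma_eq) of the Normal matching dist at x."""
    z = norm.ppf(dist.cdf(x))             # Phi^{-1}(F(x))
    sigma_eq = norm.pdf(z) / dist.pdf(x)  # phi(z) / f(x)
    mu_eq = x - sigma_eq * z
    return mu_eq, sigma_eq

# Lognormal with shape s = 0.25 and median 1.0, evaluated at its median.
dist = lognorm(s=0.25, scale=1.0)
mu_eq, sigma_eq = equivalent_normal(dist, 1.0)
print(mu_eq, sigma_eq)
```

Because the matching is point-by-point, the correlation structure of the transformed variables is not pinned down by this step alone, which is exactly the gap the EnRF/IRF improvements address.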


2014 ◽  
Vol 759 ◽  
pp. 676-700 ◽  
Author(s):  
C. Rodas ◽  
M. Pulido

The propagation of transient inertio-gravity waves in a shear flow is examined using the Gaussian beam formulation. This formulation assumes Gaussian wavepackets in the spectral space and uses a second-order Taylor expansion of the phase of the wave field. In this sense, the Gaussian beam formulation is also an asymptotic approximation like spatial ray tracing; however, the former is free of the singularities found in spatial ray tracing at caustics. Therefore, the Gaussian beam formulation permits the examination of the evolution of transient inertio-gravity wavepackets from the initial time up to the destabilization of the flow close to the critical levels. We show that the transience favours the development of the dynamical instability relative to the convective instability. In particular, there is a well-defined threshold for which small initial amplitude transient inertio-gravity waves never reach the convective instability criterion. This threshold does not exist for steady-state inertio-gravity waves, for which the wave amplitude increases indefinitely towards the critical level. The Gaussian beam formulation is shown to be a powerful tool to treat analytically several aspects of inertio-gravity waves in simple shear flows. In more realistic shear flows, its numerical implementation is readily available and the required numerical calculations have a low computational cost.


2020 ◽  
Author(s):  
Lídia Rocha ◽  
Kelen Vivaldini

Unmanned Aerial Vehicles (UAVs) have been increasingly employed in missions with a pre-defined path. Over the years, UAVs have become necessary in complex environments, where traditional algorithms demand high computational cost and execution time. To solve this problem, meta-heuristic algorithms are used. Meta-heuristics are generic algorithms that solve problems without prescribing each step to the result, searching for the best possible answer in an acceptable computational time. The simulations were made in Python, and a statistical analysis was performed based on the execution time and path length of the algorithms Particle Swarm Optimization (PSO), Grey Wolf Optimization (GWO) and Glowworm Swarm Optimization (GSO). Although GWO returns paths in a shorter time, PSO showed better overall performance, with similar execution time and shorter path length. However, the reliability of the algorithms depends on the size of the environment: PSO is less reliable in large environments, while GWO maintains the same reliability.
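
A bare-bones PSO sketch (illustrative, not the authors' planner) shows the update rule that would drive waypoint coordinates in such a path planner; here it minimises a simple benchmark function, and all coefficients are conventional textbook choices.

```python
# Hedged sketch: canonical PSO with inertia w and cognitive/social
# coefficients c1, c2, minimising the sphere function in 2D.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # benchmark objective to minimise
    return float(np.sum(x ** 2))

n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), np.array([sphere(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([sphere(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(sphere(gbest))   # close to the optimum 0.0
```

In path planning the decision vector would hold waypoint coordinates and the objective would combine path length with obstacle penalties; the update rule is unchanged.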


2019 ◽  
Vol 10 (3) ◽  
pp. 380-392
Author(s):  
Georgios I. Giannopoulos ◽  
Stelios K. Georgantzinos ◽  
Androniki Tsiamaki ◽  
Nicolaos Anifantis

Purpose
The purpose of this paper is the computation of the elastic mechanical behaviour of the fullerene C60 reinforced polyamide-12 (PA-12) via a two-stage numerical technique which combines the molecular dynamics (MD) method and the finite element method (FEM).

Design/methodology/approach
At the first stage, the proposed numerical scheme utilizes MD to characterize the pure PA-12 as well as a very small cubic unit cell containing a C60 molecule, centrally positioned and surrounded by PA-12 molecular chains. At the second stage, a classical continuum mechanics (CM) analysis based on the FEM is adopted to approximate the elastic mechanical performance of the nanocomposite with significantly lower C60 mass concentrations. According to the elastic properties computed by the MD simulations, an equivalent solid element with the same size as the unit cell is developed. Then, a CM micromechanical representative volume element (RVE) of the C60 reinforced PA-12 is modelled via FEM. The matrix phase of the RVE is discretized with solid finite elements that represent the PA-12 mechanical behaviour predicted by MD, while the C60 neighbouring region is meshed with the equivalent solid element.

Findings
Several multiscale simulations are performed to study the effect of the nanofiller mass fraction on the mechanical properties of the C60 reinforced PA-12 composite. Comparisons with corresponding experimental results are attempted, where possible, to test the performance of the proposed method.

Originality/value
The proposed numerical scheme allows accurate representation of atomistic interfacial effects between C60 and PA-12 and simultaneously offers a significantly lower computational cost compared with the MD-only method.


SPE Journal ◽  
2013 ◽  
Vol 19 (02) ◽  
pp. 304-315 ◽  
Author(s):  
Yuhe Wang ◽  
John E. Killough

Summary The quest for efficient and scalable parallel reservoir simulators has been evolving with the advancement of high-performance computing architectures. Among the various challenges of efficiency and scalability, load imbalance is a major obstacle that has not been fully addressed and solved. The causes of load imbalance in parallel reservoir simulation are both static and dynamic. Robust graph-partitioning algorithms are capable of handling static load imbalance by decomposing the underlying reservoir geometry to distribute a roughly equal load to each processor. However, these loads that are determined by a static load balancer seldom remain unchanged as the simulation proceeds in time. This so-called dynamic imbalance can be exacerbated further in parallel compositional simulations. The flash calculations for equations of state (EOSs) in complex compositional simulations not only can consume more than half of the total execution time but also are difficult to balance merely by a static load balancer. The computational cost of flash calculations in each gridblock heavily depends on the dynamic data such as pressure, temperature, and hydrocarbon composition. Thus, any static assignment of gridblocks may lead to dynamic load imbalance in unpredictable manners. A dynamic load balancer can often provide solutions for this difficulty. However, traditional techniques are inflexible and tedious to implement in legacy reservoir simulators. In this paper, we present a new approach to address dynamic load imbalance in parallel compositional simulation. It overdecomposes the reservoir model to assign each processor a bundle of subdomains. Processors treat these bundles of subdomains as virtual processes or user-level migratable threads that can be dynamically migrated across processors in the run-time system. This technique is shown to be capable of achieving better overlap between computation and communication for cache efficiency. 
We use this approach in a legacy reservoir simulator and demonstrate a reduction in the execution time of parallel compositional simulations while requiring minimal changes to the source code. Finally, it is shown that domain overdecomposition, together with a load balancer, can improve speedup from 29.27 to 62.38 on 64 physical processors for a realistic simulation problem.
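
The overdecomposition idea can be hedged into a toy sketch: split the reservoir into many more subdomains than processors, then reassign bundles whenever measured per-subdomain costs (e.g. flash calculations) drift. The greedy longest-processing-time assignment below is a stand-in for the paper's run-time system of migratable threads, and the cost numbers are hypothetical.

```python
# Hedged sketch: greedy LPT assignment of subdomain cost bundles to
# processors; rerunning it with fresh costs rebalances without
# repartitioning the grid.
import heapq

def assign(costs, nprocs):
    """Greedy longest-processing-time assignment of subdomain costs."""
    heap = [(0.0, p, []) for p in range(nprocs)]
    heapq.heapify(heap)
    for i, c in sorted(enumerate(costs), key=lambda t: -t[1]):
        load, p, subs = heapq.heappop(heap)   # least-loaded processor
        subs.append(i)
        heapq.heappush(heap, (load + c, p, subs))
    return sorted(heap, key=lambda t: t[1])   # order by processor id

# Hypothetical per-subdomain flash-calculation costs at one timestep.
costs = [5.0, 1.0, 1.0, 1.0, 4.0, 2.0, 1.0, 1.0]
for load, p, subs in assign(costs, 2):
    print(p, round(load, 1), subs)
```

Because each processor holds several small bundles rather than one large block, migrating a bundle is cheap, which is what makes dynamic rebalancing practical mid-simulation.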


Author(s):  
Emily Earl ◽  
Hadi Mohammadi

Finite element analysis is a well-established computational tool which can be used for the analysis of soft tissue mechanics. Due to the structural complexity of the leaflet tissue of the heart valve, the currently available finite element models do not adequately represent the leaflet tissue. A method of addressing this issue is to implement computationally expensive finite element models, characterized by precise constitutive models including high-order and high-density mesh techniques. In this study, we introduce a novel numerical technique that enhances the results obtained from coarse mesh finite element models to provide accuracy comparable to that of fine mesh finite element models while maintaining a relatively low computational cost. Introduced in this study is a method by which the computational expense required to solve linear and nonlinear constitutive models, commonly used in heart valve mechanics simulations, is reduced while continuing to account for large and infinitesimal deformations. This continuum model is developed based on the least square algorithm procedure coupled with the finite difference method adhering to the assumption that the components of the strain tensor are available at all nodes of the finite element mesh model. The suggested numerical technique is easy to implement, practically efficient, and requires less computational time compared to currently available commercial finite element packages such as ANSYS and/or ABAQUS.


2010 ◽  
Vol 163-167 ◽  
pp. 2404-2409 ◽  
Author(s):  
Bin Yang ◽  
Qi Lin Zhang

Recently, a modified Particle Swarm Optimizer (MLPSO) has succeeded in solving truss topology optimization problems, obtaining competitive results. Since this optimizer belongs to the family of evolutionary algorithms and is plagued by high computational cost as measured by execution time, a parallel version of the optimizer is studied in this paper in order to reduce its execution time on large, complex optimization problems. The paper first gives an overview of the PSO algorithm as well as the modified PSO, and then proposes a design and implementation of a parallel PSO. The performance of the proposed algorithm is tested on two examples and a promising speed-up rate is obtained. The final part presents conclusions and an outlook.


2009 ◽  
Vol 06 (01) ◽  
pp. 75-91
Author(s):  
GANESH S. HEGDE ◽  
G. M. MADHU

Faster convergence, better accuracy and improved stability of the solutions to fluid flow and heat transfer problems in CFD reduce the computational cost and time. Numerical solutions to the partial differential equations governing physical flow and heat phenomena have been obtained, using computer software and hardware, by various techniques refined over the years. These numerical techniques are based on finite difference method (FDM) approximations derived from Taylor series expansions. Because of linearization, truncation error creeps into the FDM approximations of the partial derivatives, distorting the final results in terms of accuracy, convergence and stability. As the prime objective of this paper, the minimization of truncation error is attempted with the aid of the interface theory (briefly described in the appendix) used as a computational treatment tool. In simple terms, the interface theory provides an optimal solution for all variables in a linear indeterminate system with redundancy in unknowns. The effort has converged in the form of Hegde's interface numerical technique (HINT), which is demonstrated on a quasi-one-dimensional nozzle flow whose physical behavior is described by the Navier–Stokes equations considered specific to the said case. HINT matches the results of MacCormack's predictor–corrector method in accuracy, but with less computational effort and higher productivity. To the knowledge of the authors, HINT may be considered both original and unique of its kind among the vast developments in CFD.
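
The truncation-error point the abstract builds on can be shown in a few lines: Taylor expansion makes the forward difference first-order accurate and the central difference second-order accurate, so halving the step cuts the central-difference error by roughly four. The test function and step sizes below are arbitrary illustrative choices.

```python
# Hedged sketch: truncation error of finite differences for d/dx sin(x)
# at x = 1, whose exact value is cos(1).
import math

f, dfdx = math.sin, math.cos(1.0)

for h in (1e-2, 5e-3):
    fwd = (f(1.0 + h) - f(1.0)) / h              # O(h) truncation error
    ctr = (f(1.0 + h) - f(1.0 - h)) / (2 * h)    # O(h^2) truncation error
    print(f"h={h:.0e}  fwd err={abs(fwd - dfdx):.2e}  "
          f"ctr err={abs(ctr - dfdx):.2e}")
```

Techniques like HINT aim to push this leading error term down further without the extra stencil evaluations that higher-order schemes normally require.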

