Acceleration Techniques: Recently Published Documents

Total documents: 277 (63 in the last five years)
H-index: 25 (3 in the last five years)

2022
Author(s): Juliana Gómez Arana, Diego Rey, Héctor Ríos, María Antonia Álvarez, Lucia Cevidanes, ...

Objectives: To quantitatively evaluate root resorption of the lower incisors and canines in patients who underwent orthodontic treatment with piezocision and/or a collagen reinforcement technique using a fully resorbable three-dimensional (3D) collagen xenograft matrix, compared with a control group. Materials and Methods: The sample of this secondary analysis consisted of 32 periodontally healthy patients with Angle Class I malocclusion or mild Class II or III malocclusion and moderate irregularity index scores who underwent orthodontic treatment and had cone-beam computed tomography scans taken before (T0) and after (T1) treatment. Root resorption of the lower incisors and canines was assessed quantitatively in four groups: the control group received orthodontic treatment without piezocision, experimental group 1 received orthodontic treatment with piezocision, experimental group 2 received orthodontic treatment with piezocision and a 3D collagen matrix, and experimental group 3 received orthodontic treatment with a 3D collagen matrix. Results: A statistically significant decrease in root length from T0 to T1 was observed in all groups (P < .05). However, there was no significant difference among the groups in the amount of root length lost from T0 to T1. Conclusions: Orthodontic treatment combined with piezocision does not increase the risk of root resorption of the lower incisors and canines compared with orthodontic treatment without acceleration techniques. Studies with larger samples should be undertaken to confirm these results.


Electronics, 2021, Vol 10 (24), pp. 3118
Author(s): Eduardo Alcaín, Pedro R. Fernández, Rubén Nieto, Antonio S. Montemayor, Jaime Vilas, ...

Medical imaging is considered one of the most important advances in the history of medicine and has become an essential part of patient diagnosis and treatment. The push for earlier prediction and treatment has driven the acquisition of higher image resolutions and the fusion of different modalities, raising the need for sophisticated hardware and software systems for medical image registration, storage, analysis, and processing. Given new clinical pipelines and the heavy clinical burden on hospitals, these systems are often required to process large amounts of imaging data both accurately and in real time. Additionally, lowering the cost of each part of the imaging equipment, as well as of its development and deployment, and extending its lifespan are crucial to making healthcare more affordable and accessible. This paper traces the evolution and application of different hardware architectures (namely CPU, GPU, DSP, FPGA, and ASIC) in medical imaging through specific examples, discussing which options suit which applications. The main purpose is to provide a general introduction to hardware acceleration techniques for medical imaging researchers and developers who need to accelerate their implementations.


2021, Vol 8 (2), pp. 177-198
Author(s): Wenshi Wu, Beibei Wang, Ling-Qi Yan

Participating media are common in real-world scenes, whether milk, fruit juice, oil, or the muddy water of a river or the ocean. Incoming light interacts with these media in complex ways: refraction at boundaries, and scattering and absorption inside the volume. The radiative transfer equation is the key to solving this problem, and the various categories of rendering methods all build on it while solving it in different ways. In this paper, we introduce these groups, which include volume-density-estimation approaches, virtual point/ray/beam lights, point-based approaches, Monte Carlo approaches, acceleration techniques, accurate single-scattering methods, neural-network-based methods, and methods for spatially correlated participating media. Alongside discussing these methods, we consider the challenges and open problems in this research area.
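For reference (standard background, not quoted from the paper), the radiative transfer equation describes how the radiance L at a point x in direction ω changes along a ray due to extinction, in-scattering, and emission:

\[
  (\omega \cdot \nabla)\, L(\mathbf{x}, \omega) = -\sigma_t(\mathbf{x})\, L(\mathbf{x}, \omega)
  + \sigma_s(\mathbf{x}) \int_{S^2} p(\omega', \omega)\, L(\mathbf{x}, \omega')\, \mathrm{d}\omega'
  + \sigma_a(\mathbf{x})\, L_e(\mathbf{x}, \omega)
\]

Here σ_a and σ_s are the absorption and scattering coefficients, σ_t = σ_a + σ_s is the extinction coefficient, and p is the phase function; the rendering method families listed above differ mainly in how they estimate the in-scattering integral.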


2021, Vol 11 (04), pp. 1-11
Author(s): Wanwan Li

In mechanical engineering education, simulating fluid thermodynamics helps students understand the natural behavior of fluids. However, rendering high-quality fluid simulations in real time is challenging because of the intensive computation involved. To speed up the simulations, we take advantage of GPU acceleration techniques to simulate interactive fluid thermodynamics in real time. In this paper, we present an elegant, basic, but practical OpenGL/SL framework for fluid simulation with heat-map rendering. By solving the Navier-Stokes equations coupled with the heat diffusion equation, we validate our framework through real-case studies of smoke-like fluid rendering, including interactions with moving obstacles and heat diffusion effects. As shown in Fig. 1, a group of experimental results demonstrates that our GPU-accelerated solver of the Navier-Stokes equations with heat transfer gives observers impressive real-time and realistic rendering results.
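As a rough illustration of the heat-diffusion component only (a minimal CPU-side sketch in Python/NumPy, not the authors' OpenGL/SL implementation; grid size, time step, and diffusivity are arbitrary), one explicit finite-difference update of dT/dt = α∇²T looks like this. Each cell's update depends only on its four neighbors, which is why this kind of step maps naturally onto a GPU fragment shader:

```python
import numpy as np

def diffuse_heat(T, alpha=0.1, dt=0.01, dx=1.0):
    """One explicit finite-difference step of dT/dt = alpha * laplacian(T)."""
    # five-point Laplacian; np.roll gives periodic boundaries for brevity
    lap = (np.roll(T, 1, axis=0) + np.roll(T, -1, axis=0) +
           np.roll(T, 1, axis=1) + np.roll(T, -1, axis=1) - 4.0 * T) / dx**2
    return T + alpha * dt * lap

T = np.zeros((64, 64))
T[32, 32] = 100.0                # a single hot cell
for _ in range(100):
    T = diffuse_heat(T)
print(T[30:35, 30:35].round(3))  # heat has spread to neighboring cells
```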


2021
Author(s): Yifan Zhou, Jiamin Jiang, Pavel Tomin

The sequential fully implicit (SFI) scheme was introduced (Jenny et al. 2006) for solving coupled flow and transport problems. Each SFI time step consists of an outer loop with inner Newton loops that implicitly and sequentially solve the pressure and transport sub-problems. In standard SFI, the sub-problems are usually solved to full accuracy at each outer iteration. This can waste computation that contributes little towards the coupled solution, an issue known as ‘over-solving’. Our objective is to minimize cost while maintaining or improving the convergence of SFI by preventing over-solving. We first developed a framework based on nonlinear acceleration techniques (Jiang and Tchelepi 2019) to ensure robust outer-loop convergence. We then developed inexact-type methods that prevent over-solving and minimize the cost of the inner solvers. The motivation is similar to the inexact Newton method, where the inner (linear) iterations are controlled so that the outer (Newton) convergence is not degraded but the overall computational effort is greatly reduced. We proposed an adaptive strategy that sets relative tolerances based on the convergence rates of the coupled problem. The developed inexact SFI method was tested in numerous simulation studies. We compared different strategies, such as fixed relaxations of the absolute and relative tolerances of the inner solvers. The test cases included synthetic as well as real-field models with complex flow physics and high heterogeneity. The results show that the basic SFI method is quite inefficient. When the coupling is strong, the outer convergence is mainly restricted by the initial residuals of the sub-problems, and feedback from one inner solver can cause the residual of the other to rebound to a much higher level. Away from a coupled solution, additional accuracy achieved in the inner solvers is wasted, contributing little or nothing to the reduction of the overall residual. By comparison, the inexact SFI method adaptively provided relative tolerances adequate for the sub-problems. We show across a wide range of flow conditions that inexact SFI effectively resolves the over-solving issue and thus greatly improves overall performance. The novel contributions of this paper are: 1) the observation that, in SFI, there is no need for one sub-problem to strive for perfection (over-solving) while the coupled residual remains high because of the other sub-problem; 2) a novel inexact SFI method that prevents over-solving and minimizes the cost of the inner solvers; 3) an adaptive strategy for relative tolerances based on the convergence rates of the coupled problem; and 4) a novel SFI framework based on nonlinear acceleration techniques to ensure robust outer-loop convergence.
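A minimal sketch of the general inexact idea, assuming placeholder solver callbacks (this is illustrative structure, not the authors' algorithm; the names solve_pressure, solve_transport, coupled_residual, and the forcing parameter eta are hypothetical): each outer iteration hands the inner pressure and transport solvers a relative tolerance tied to the current coupled residual, so neither sub-problem is solved far beyond what the coupling currently warrants.

```python
def inexact_sfi(solve_pressure, solve_transport, coupled_residual, x,
                outer_tol=1e-6, max_outer=50, eta=0.1):
    """Outer loop with adaptive relative tolerances for the inner solvers.

    solve_pressure / solve_transport are placeholder callbacks that advance the
    state x to the given tolerance; coupled_residual returns the norm of the
    coupled (outer) residual.
    """
    for outer_it in range(max_outer):
        r = coupled_residual(x)
        if r < outer_tol:
            return x, outer_it
        inner_tol = max(eta * r, outer_tol)    # looser when far from the coupled solution
        x = solve_pressure(x, tol=inner_tol)   # inexact pressure solve
        x = solve_transport(x, tol=inner_tol)  # inexact transport solve
    return x, max_outer

# toy usage: "pressure" and "transport" each pull x halfway towards the solution 1.0
step = lambda x, tol: x + 0.5 * (1.0 - x)
print(inexact_sfi(step, step, lambda x: abs(1.0 - x), x=0.0))
```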


Author(s): Mustafa C. Camur, Thomas Sharkey, Chrysafis Vogiatzis

We consider the problem of identifying the induced star with the largest-cardinality open neighborhood in a graph. This problem, also known as the star degree centrality (SDC) problem, is shown to be NP-complete. In this work, we first propose a new integer programming (IP) formulation that has fewer constraints and fewer nonzero coefficients than the existing formulation in the literature. We present classes of networks in which the problem is solvable in polynomial time and offer a new proof of NP-completeness showing that the problem remains NP-complete for both bipartite and split graphs. In addition, we propose a decomposition framework that is suitable for both the existing formulation and ours. We implement several acceleration techniques in this framework, motivated by techniques used in Benders decomposition. We test our approaches on networks generated according to the Barabási–Albert, Erdös–Rényi, and Watts–Strogatz models. Our decomposition approach outperforms solving the IP formulations in most of the instances in terms of both solution time and quality; this is especially true for larger and denser graphs. We then test the decomposition algorithm on large-scale protein–protein interaction networks, for which SDC is shown to be an important centrality metric. Summary of Contribution: In this study, we first introduce a new integer programming (NIP) formulation for the star degree centrality (SDC) problem, in which the goal is to identify the induced star with the largest open neighborhood. We then show that, although SDC can be solved efficiently on tree graphs, it remains NP-complete on both split and bipartite graphs via a reduction from the set cover problem. In addition, we implement a decomposition algorithm motivated by Benders decomposition, together with several acceleration techniques, for both the NIP formulation and the existing formulation in the literature. Our experimental results indicate that the decomposition implementation on the NIP is the best solution method in terms of both solution time and quality.
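To make the objective concrete, here is a small hedged sketch (plain Python on an adjacency-dict graph; the helper name and the example graph are illustrative, not taken from the paper) that evaluates one candidate induced star, i.e. counts the open neighborhood of a center plus an independent set of its neighbors:

```python
def star_open_neighborhood_size(adj, center, leaves):
    """Size of the open neighborhood of the induced star {center} | leaves."""
    leaves = set(leaves)
    assert leaves <= adj[center], "leaves must be neighbors of the center"
    # induced star: no edges between leaves
    assert all(w not in adj[u] for u in leaves for w in leaves if u != w)
    star = leaves | {center}
    open_nbhd = set().union(*(adj[v] for v in star)) - star
    return len(open_nbhd)

# tiny example graph as an adjacency dict
adj = {
    0: {1, 2, 3},
    1: {0, 4},
    2: {0, 5},
    3: {0},
    4: {1},
    5: {2},
}
print(star_open_neighborhood_size(adj, center=0, leaves=[1, 2]))  # -> 3 (nodes 3, 4, and 5)
```

The SDC problem asks for the star maximizing this count over all centers and all independent leaf sets, which is what the IP formulations and the decomposition approach search over.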


Author(s): A. Fischer, A. F. Izmailov, M. Jelitte

It is well recognized that, in the presence of singular (and in particular nonisolated) solutions of unconstrained or constrained smooth nonlinear equations, the existence of critical solutions has a crucial impact on the behavior of various Newton-type methods. On the one hand, such solutions have been shown to be attractors for the sequences generated by these methods, for wide domains of starting points, and with a linear convergence rate estimate. On the other hand, the pattern of convergence to such solutions is quite special and allows a sharp characterization, which serves in particular as a basis for some known acceleration techniques and for the proof of asymptotic acceptance of the unit stepsize. The latter is an essential property for the success of these techniques when combined with a linesearch strategy for globalizing convergence. This paper extends these results to piecewise smooth equations, with applications to corresponding reformulations of nonlinear complementarity problems.


2021, Vol 18 (181), pp. 20210241
Author(s): Jesse A. Sharp, Kevin Burrage, Matthew J. Simpson

Optimal control theory provides insight into complex resource allocation decisions. The forward–backward sweep method (FBSM) is an iterative technique commonly implemented to solve two-point boundary value problems arising from the application of Pontryagin’s maximum principle (PMP) in optimal control. The FBSM is popular in systems biology, as it scales well with system size and is straightforward to implement. In this review, we discuss the PMP approach to optimal control and the implementation of the FBSM. By conceptualizing the FBSM as a fixed-point iteration, we leverage and adapt existing acceleration techniques to improve its rate of convergence. We show that this improvement is attainable without prohibitively costly tuning of the acceleration techniques. Furthermore, we demonstrate that these methods can induce convergence where the underlying FBSM fails to converge. All code used in this work to implement the FBSM and the acceleration techniques is available on GitHub at https://github.com/Jesse-Sharp/Sharp2021.
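As a hedged illustration of wrapping a fixed-point iteration in an acceleration scheme (the review covers several such methods; the sketch below is a generic Aitken delta-squared extrapolation applied to a toy map, not code from the authors' repository), the idea is to treat each FBSM sweep as one application of a map g and extrapolate from successive iterates:

```python
import numpy as np

def aitken_accelerate(g, x0, tol=1e-10, max_iter=200):
    """Aitken delta-squared acceleration of the fixed-point iteration x = g(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x1 = g(x)
        x2 = g(x1)
        denom = x2 - 2.0 * x1 + x
        # fall back to the plain iterate where the denominator is tiny
        safe = np.where(np.abs(denom) > 1e-14, denom, 1.0)
        x_new = np.where(np.abs(denom) > 1e-14, x - (x1 - x) ** 2 / safe, x2)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# example: solve x = cos(x)
print(aitken_accelerate(np.cos, x0=[1.0]))  # ~0.7390851
```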


Author(s): Rui Hu, Yanmin Gong, Yuanxiong Guo

Federated learning (FL) enables distributed agents to collaboratively learn a centralized model without sharing their raw data with each other. However, data locality alone does not provide sufficient privacy protection, and it is desirable to equip FL with a rigorous differential privacy (DP) guarantee. Existing DP mechanisms introduce random noise with magnitude proportional to the model size, which can be very large in deep neural networks. In this paper, we propose a new FL framework with sparsification-amplified privacy. Our approach integrates random sparsification with gradient perturbation on each agent to amplify the privacy guarantee. Since sparsification increases the number of communication rounds required to reach a given target accuracy, which is unfavorable for the DP guarantee, we further introduce acceleration techniques to help reduce the privacy cost. We rigorously analyze the convergence of our approach and use Rényi DP to tightly account for the end-to-end DP guarantee. Extensive experiments on benchmark datasets validate that our approach outperforms previous differentially private FL approaches in both privacy guarantee and communication efficiency.
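A hedged sketch of the general per-agent idea (not the authors' exact mechanism; the keep ratio, clip norm, and noise scale below are illustrative placeholders): each agent randomly sparsifies its gradient, clips it to bound sensitivity, and adds Gaussian noise to the retained coordinates before communicating.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparsify_and_perturb(grad, keep_ratio=0.1, clip_norm=1.0, noise_std=0.5):
    """Randomly sparsify a gradient, clip it, and add Gaussian noise to kept coordinates."""
    grad = np.asarray(grad, dtype=float)
    k = max(1, int(keep_ratio * grad.size))
    mask = np.zeros(grad.size, dtype=bool)
    mask[rng.choice(grad.size, size=k, replace=False)] = True   # random sparsification
    sparse = np.where(mask, grad, 0.0)
    norm = np.linalg.norm(sparse)
    if norm > clip_norm:                                        # clip to bound sensitivity
        sparse *= clip_norm / norm
    noise = rng.normal(0.0, noise_std, size=grad.size) * mask   # perturb only kept coordinates
    return sparse + noise

print(sparsify_and_perturb(np.ones(20)))
```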


2021, Vol 263 (4), pp. 2665-2673
Author(s): Thomas Judd, Stefan Weigand, Jochen Schaal

The analysis of noise and acoustics in indoor spaces is often performed with geometrical methods from the ray-tracing family, such as the sound particle method. In general, these offer an acceptable balance between physical accuracy and computational effort, but models with large numbers of objects and high levels of detail can lead to long waits for results. In this paper, we consider methods to assist with the efficient analysis of such situations in the context of the sound particle diffraction model. A modern open-plan office and a large cathedral are used as example projects. We look at space partitioning strategies, adaptive placement of receivers in the form of mesh noise maps, and graphics-card-style hardware acceleration techniques, along with iterative modelling methods. The role of geometrical detail in the context of uncertainties in the input data, such as absorption and scattering coefficients, is also studied. From this, we offer a range of recommendations regarding the level-of-detail in acoustic modelling, including consideration of issues such as seating, tables, and curved surfaces.

