Computational Costs of Multi-Frontal Direct Solvers with Analysis-Suitable T-Splines

Symmetry ◽  
2020 ◽  
Vol 12 (12) ◽  
pp. 2070
Author(s):  
Anna Paszyńska ◽  
Maciej Paszyński

In this paper, we consider the computational cost of a multi-frontal direct solver used for the factorization of matrices resulting from a discretization of isogeometric analysis with T-splines and analysis-suitable T-splines. We start from model projection and heat-transfer problems discretized over two-dimensional meshes, either uniformly refined or refined towards a point or an edge. These grids preserve several symmetries, and they are the building blocks of more complicated grids constructed during adaptive isotropic refinement procedures. A large class of computational problems constructs meshes refined towards point or edge singularities. We propose an ordering that permutes the matrix in such a way that the computational cost of a multi-frontal solver executed on adaptive grids is linear. We show that analysis-suitable T-splines with our ordering, besides having other well-known advantages, also significantly reduce the computational cost of factorization with the multi-frontal direct solver. Namely, the factorization with N T-splines of order p over meshes refined to a point has a linear O(N p^4) cost, and the factorization with T-splines on meshes refined to an edge has an O(N 2^p p^2) cost. We compare the execution time of the multi-frontal solver with our ordering to the Approximate Minimum Degree (AMD) and Cuthill–McKee orderings available in Octave.
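The effect of a fill-reducing ordering on factorization cost can be reproduced with standard sparse tools. The Python sketch below is illustrative only: the random matrix and random permutation are stand-ins, not the paper's T-spline system or its specific ordering. It compares the fill-in produced by SuperLU's built-in COLAMD ordering against a user-supplied permutation applied before factorization, using nnz(L+U) as a proxy for cost.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Stand-in sparse system (not a T-spline matrix): random pattern plus a
# dominant diagonal so the factorization is stable.
n = 2000
A = sp.random(n, n, density=5e-3, format="csc", random_state=0)
A = (A + A.T + n * sp.identity(n)).tocsc()

# Factor with SuperLU's built-in fill-reducing column ordering (COLAMD).
lu_builtin = splu(A, permc_spec="COLAMD")

# Factor with a user-supplied permutation: reorder A explicitly, then ask
# SuperLU not to re-permute. A problem-specific ordering (such as the
# paper's) would replace this random stand-in permutation.
perm = np.random.default_rng(0).permutation(n)
A_perm = A[perm, :][:, perm].tocsc()
lu_custom = splu(A_perm, permc_spec="NATURAL")

# nnz(L + U), the fill-in, is a standard proxy for factorization cost.
print("COLAMD ordering  nnz(L+U):", lu_builtin.nnz)
print("custom ordering  nnz(L+U):", lu_custom.nnz)
```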

Atmosphere ◽  
2018 ◽  
Vol 9 (11) ◽  
pp. 444 ◽  
Author(s):  
Jinxi Li ◽  
Jie Zheng ◽  
Jiang Zhu ◽  
Fangxin Fang ◽  
Christopher Pain ◽  
...  

Advection errors are common in basic terrain-following (TF) coordinates. Numerous methods, including the hybrid TF coordinate and the smoothing of vertical layers, have been proposed to reduce these errors, which are affected by the direction of the velocity field and the complexity of the terrain. In this study, an unstructured adaptive mesh together with the discontinuous Galerkin finite element method is employed to reduce advection errors over steep terrain. To test the capability of adaptive meshes, five two-dimensional (2D) idealized tests are conducted, and the results on adaptive meshes are compared with those on cut-cell and TF meshes. The results show that using adaptive meshes reduces the advection errors by one to two orders of magnitude compared to the cut-cell and TF meshes, regardless of variations in velocity direction or terrain complexity. Furthermore, adaptive meshes can reduce the advection errors when the tracer moves tangentially along the terrain surface, and they allow the terrain to be represented without incurring severe dispersion. Finally, the computational cost is analyzed. To achieve a given tagging-criterion level, the adaptive mesh requires fewer nodes, smaller minimum mesh sizes, less runtime, and fewer nodes per wavelength to resolve the tracer than the cut-cell and TF meshes, thus reducing the computational cost.
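A refinement tagging criterion of the kind referenced above can be sketched in a few lines. The Python fragment below is a minimal illustration under assumed details (a gradient threshold on a 1D transect); the study's actual criterion and mesh library are not reproduced here.

```python
import numpy as np

def refine_flags(tracer, dx, threshold=0.1):
    """Tag cells whose tracer gradient magnitude exceeds the criterion."""
    grad = np.abs(np.gradient(tracer, dx))
    return grad > threshold  # True = refine this cell, False = may coarsen

# Example: a Gaussian tracer blob on a 1D transect.
x = np.linspace(0.0, 10.0, 401)
tracer = np.exp(-((x - 5.0) / 0.5) ** 2)
flags = refine_flags(tracer, x[1] - x[0])
print("cells tagged for refinement:", int(flags.sum()))
```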


2016 ◽  
Vol 2016 ◽  
pp. 1-15 ◽  
Author(s):  
Hongjie Guo ◽  
Guojun Dai ◽  
Jin Fan ◽  
Yifan Wu ◽  
Fangyao Shen ◽  
...  

This paper develops a mobile sensing system, the first system used for adaptive-resolution urban air quality monitoring. In this system, we employ several taxis as sensor carriers to collect original PM2.5 data together with a variety of other datasets, including meteorological data, traffic status data, and geographical data across the city. This paper also presents a novel method, AG-PCEM (Adaptive Grid-Probabilistic Concentration Estimation Method), to infer the PM2.5 concentration for undetected grid cells using dynamic adaptive grids. We collected measurements over a full year using a prototype system in the Xiasha District of Hangzhou City, China. Experimental data verify that the proposed system achieves good performance in terms of computational cost and accuracy. The computational cost of AG-PCEM is reduced by about 40.2% compared with the static-grid method PCEM while reaching comparable accuracy, and the accuracy of AG-PCEM is far superior to that of the widely used artificial neural network (ANN) and Gaussian process (GP) methods, improved by 38.8% and 14.6%, respectively. The system can be extended to wide-area air quality monitoring by adjusting the initial grid resolution, and our findings can inform citizens of the actual air quality and help officials locate pollution sources.
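The adaptive-grid idea can be illustrated with a simple quadtree-style subdivision. The sketch below is a hypothetical stand-in, not the paper's AG-PCEM: a cell is split into four children whenever it holds more taxi measurements than a cap, so densely sampled areas end up at finer resolution.

```python
import random

# Hypothetical adaptive-grid sketch (not the paper's AG-PCEM): split any
# cell holding more than `max_samples` measurements, down to `min_size`.
def split(cell, samples, max_samples=20, min_size=0.25):
    x0, y0, size = cell
    inside = [(x, y, v) for (x, y, v) in samples
              if x0 <= x < x0 + size and y0 <= y < y0 + size]
    if len(inside) <= max_samples or size <= min_size:
        return [(cell, inside)]              # leaf cell with its samples
    half = size / 2.0
    leaves = []
    for dx in (0.0, half):
        for dy in (0.0, half):
            leaves += split((x0 + dx, y0 + dy, half), inside,
                            max_samples, min_size)
    return leaves

# Example: 500 random (x, y, pm25) readings on a 4x4 km district.
random.seed(1)
readings = [(random.uniform(0, 4), random.uniform(0, 4),
             random.uniform(10, 150)) for _ in range(500)]
grid = split((0.0, 0.0, 4.0), readings)
print("leaf cells:", len(grid))
```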


Author(s):  
Quanfang Chen ◽  
Guang Chai ◽  
Bo Li

Carbon nanotubes (CNTs) are excellent multifunctional materials in terms of mechanical robustness and thermal and electrical conductivity. These multifunctional properties, together with the small size of the structures, make CNTs ideal building blocks for developing nanocomposites. However, the matrix materials and the fabrication processes are critical in achieving the expected multifunctional properties of a CNT-reinforced nanocomposite. This paper demonstrates that electrochemical co-deposition of a metallic nanocomposite is a good approach for achieving strong interfacial bonding between CNTs and a metallic matrix. Good interfacial bonding between a single-walled carbon nanotube (SWCNT) and a copper matrix has been verified by enhanced fracture toughness (increased stickiness) and a shift in the Raman scattering spectra. For the Cu/SWCNT nanocomposite, the radial breathing mode (RBM) has disappeared, and the tangential or G-band has shifted and widened, which is an indication of better energy transport.


Geophysics ◽  
2016 ◽  
Vol 81 (3) ◽  
pp. S101-S117 ◽  
Author(s):  
Alba Ordoñez ◽  
Walter Söllner ◽  
Tilman Klüver ◽  
Leiv J. Gelius

Several studies have shown the benefits of including multiple reflections together with primaries in the structural imaging of subsurface reflectors. However, to characterize the reflector properties, there is a need to compensate for propagation effects due to multiple scattering and to properly combine the information from primaries and all orders of multiples. From this perspective and based on the wave equation and Rayleigh’s reciprocity theorem, recent works have suggested computing the subsurface image from the Green’s function reflection response (or reflectivity) by inverting a Fredholm integral equation in the frequency-space domain. By following Claerbout’s imaging principle and assuming locally reacting media, the integral equation may be reduced to a trace-by-trace deconvolution imaging condition. For a complex overburden, and considering that the structure of the subsurface is angle-dependent, this trace-by-trace deconvolution does not properly solve the Fredholm integral equation. We have inverted for the subsurface reflectivity by solving the matrix version of the Fredholm integral equation at every subsurface level, based on a multidimensional deconvolution of the receiver wavefields with the source wavefields. The total upgoing pressure and the total filtered downgoing vertical velocity were used as receiver and source wavefields, respectively. By selecting appropriate subsets of the inverted reflectivity matrix and by performing an inverse Fourier transform over the frequencies, the process allowed us to obtain wavefields corresponding to virtual sources and receivers located in the subsurface at a given level. The method has been applied to two synthetic examples, showing that the computed reflectivity wavefields are free of propagation effects from the overburden and thus are suited to extract information on the image point location in the angular and spatial domains. To reduce the computational cost, our approach is target-oriented; i.e., the reflectivity need only be computed in the area of most interest.
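At each frequency, the multidimensional deconvolution amounts to a regularized matrix inversion. The following Python sketch uses an assumed least-squares formulation with Tikhonov damping; the matrix names and the damping rule are illustrative, not the authors' implementation. It solves R D = U for the reflectivity R, with U holding upgoing-pressure traces and D the filtered downgoing vertical velocity.

```python
import numpy as np

def mdd_per_frequency(U, D, eps=1e-3):
    """Solve R @ D = U in the least-squares sense with Tikhonov damping."""
    DDh = D @ D.conj().T                                   # normal-equation operator
    damp = eps * (np.trace(DDh).real / D.shape[0]) * np.eye(D.shape[0])
    return U @ D.conj().T @ np.linalg.inv(DDh + damp)

# Example with random complex wavefield matrices (receivers x sources).
rng = np.random.default_rng(0)
D = rng.standard_normal((40, 60)) + 1j * rng.standard_normal((40, 60))
R_true = rng.standard_normal((40, 40))
U = R_true @ D
R = mdd_per_frequency(U, D)
print("reconstruction error:", np.linalg.norm(R - R_true))
```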


Author(s):  
Hyunseok Kim ◽  
Bunyodbek Ibrokhimov ◽  
Sanggil Kang

Deep convolutional neural networks (CNNs) show remarkable performance in many areas. However, most applications require huge computational costs and massive memory, which are hard to provide on relatively low-powered hardware such as embedded devices. To reduce the computational cost while preserving the performance of the trained deep CNN, we propose a new filter pruning method that uses an additional dataset derived by downsampling the original dataset. Our method exploits the fact that information in high-resolution images is lost in the downsampling process, and each trained convolutional filter reacts differently to this information loss. Based on this, the importance of each filter is evaluated by comparing the gradients obtained from the two image resolutions. We validate the superiority of our filter evaluation method using a VGG-16 model trained on the CIFAR-10 and CUB-200-2011 datasets. The network pruned with our method shows on average 2.66% higher accuracy on the latter dataset than existing pruning methods when about 75% of the parameters are removed.
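The gradient-comparison idea can be sketched with standard PyTorch operations. Everything below is an illustrative assumption (the paper's exact scoring rule, resolutions, and loss are not reproduced): score each filter of one convolution layer by how much its weight gradient changes when the batch is downsampled and upsampled back.

```python
import torch
import torch.nn.functional as F

def filter_scores(model, conv, images, labels):
    """One importance score per filter of `conv` (higher = more sensitive)."""
    def weight_grads(x):
        model.zero_grad()
        F.cross_entropy(model(x), labels).backward()
        return conv.weight.grad.detach().clone()

    g_hi = weight_grads(images)
    # Downsample to half resolution, then upsample back so shapes match.
    lo = F.interpolate(images, scale_factor=0.5, mode="bilinear",
                       align_corners=False)
    lo = F.interpolate(lo, size=images.shape[-2:], mode="bilinear",
                       align_corners=False)
    g_lo = weight_grads(lo)
    # L2 norm of the per-filter gradient difference.
    return (g_hi - g_lo).flatten(1).norm(dim=1)

# Example with a tiny stand-in model (10-class, 32x32 RGB input).
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
    torch.nn.Linear(16, 10),
)
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))
print(filter_scores(model, model[0], images, labels))
```

Under this sketch, how the scores map to a pruning decision (e.g., removing the least or most sensitive filters) is the paper's detail and is not assumed here.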


Author(s):  
A. D. Chowdhury ◽  
S. K. Bhattacharyya ◽  
C. P. Vendhan

The normal mode method is widely used in ocean acoustic propagation, and finite difference and finite element methods are usually used in its solution. Recently, a method has been proposed for heterogeneous layered waveguides in which the depth eigenproblem is solved using the classical Rayleigh–Ritz approximation. The method has high accuracy for low- to high-frequency problems. However, the matrices that appear in the eigenvalue problem for the radial wavenumbers require numerical integration of the matrix elements, since the sound speed and density profiles are defined numerically. In this paper, a technique is proposed to reduce the computational cost of the Rayleigh–Ritz method by expanding the sound speed profile in a Fourier series using a nonlinear least-squares fit, so that the integrals of the matrix elements can be computed in closed form. This technique is tested on a variety of problems and found to be sufficiently accurate in obtaining the radial wavenumbers as well as the transmission loss in a waveguide. The computational savings obtained by this approach are remarkable, the improvement being one to two orders of magnitude.
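The fitting step can be prototyped directly with SciPy. In this sketch the profile, the fixed series period, and the number of harmonics are stand-in assumptions, not the paper's setup: a truncated Fourier series is fitted to a numerically defined sound-speed profile c(z) so that downstream matrix-element integrals close analytically.

```python
import numpy as np
from scipy.optimize import curve_fit

Z_MAX = 1000.0  # assumed water depth (m); the series period is fixed to it

def fourier_series(z, c0, *ab):
    """Truncated Fourier series with fixed period Z_MAX."""
    out = np.full_like(z, c0)
    for k in range(len(ab) // 2):
        w = 2.0 * np.pi * (k + 1) / Z_MAX
        out += ab[2 * k] * np.cos(w * z) + ab[2 * k + 1] * np.sin(w * z)
    return out

z = np.linspace(0.0, Z_MAX, 200)                                  # depth samples
c = 1500.0 + 20.0 * np.exp(-z / 300.0) + 5.0 * np.sin(z / 120.0)  # stand-in profile
p0 = [c.mean()] + [0.0] * 12                                      # mean + 6 harmonics
coef, _ = curve_fit(fourier_series, z, c, p0=p0)
print("max fit error (m/s):", np.abs(fourier_series(z, *coef) - c).max())
```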


2019 ◽  
Vol 65 (3) ◽  
pp. 807-838 ◽  
Author(s):  
F. de Prenter ◽  
C. V. Verhoosel ◽  
E. H. van Brummelen ◽  
J. A. Evans ◽  
C. Messe ◽  
...  

Ill-conditioning of the system matrix is a well-known complication in immersed finite element methods and trimmed isogeometric analysis. Elements with small intersections with the physical domain yield problematic eigenvalues in the system matrix, which generally degrades the efficiency and robustness of iterative solvers. In this contribution we investigate the spectral properties of immersed finite element systems treated by Schwarz-type methods, to establish their suitability as smoothers in a multigrid method. Based on this investigation we develop a geometric multigrid preconditioner for immersed finite element methods, which provides mesh-independent and cut-element-independent convergence rates. This preconditioning technique is applicable to higher-order discretizations, and enables solving large-scale immersed systems at a computational cost that scales linearly with the number of degrees of freedom. The performance of the preconditioner is demonstrated for conventional Lagrange basis functions and for isogeometric discretizations with both uniform B-splines and locally refined approximations based on truncated hierarchical B-splines.
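A Schwarz-type smoother of the kind investigated here can be illustrated on a toy problem. The sketch below uses a 1D Laplacian stand-in rather than an immersed system, and the block sizes and damping factor are illustrative assumptions: small overlapping local problems are solved exactly and the damped corrections are summed, which is the role such smoothers play inside a multigrid cycle.

```python
import numpy as np

def schwarz_smooth(A, b, x, blocks, omega=0.5):
    """One damped additive-Schwarz sweep: exact solves on overlapping blocks."""
    r = b - A @ x
    dx = np.zeros_like(x)
    for idx in blocks:                       # idx: indices of one subdomain
        Aii = A[np.ix_(idx, idx)]
        dx[idx] += np.linalg.solve(Aii, r[idx])
    return x + omega * dx

n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stand-in
b = np.ones(n)
x = np.zeros(n)
blocks = [np.arange(i, min(i + 8, n)) for i in range(0, n, 4)]  # overlapping
for _ in range(50):
    x = schwarz_smooth(A, b, x, blocks)
print("residual norm:", np.linalg.norm(b - A @ x))
```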


Author(s):  
Lifang Zhou ◽  
Hongmei Li ◽  
Weisheng Li ◽  
Bangjun Lei ◽  
Lu Wang

Accurate scale estimation of the target plays an important role in object tracking. Most state-of-the-art methods estimate the target size by employing an exhaustive scale search; such methods can achieve high accuracy but suffer from a large computational cost. In this paper, we first propose an adaptive scale search strategy with a scale selection factor in place of the exhaustive scale search. The proposed strategy reduces the computational cost through adaptive sampling. Furthermore, the boundary effects of correlation filters are suppressed by exploiting background information, so that the accuracy of the proposed tracker is boosted. Empirical evaluations on 61 challenging benchmark sequences demonstrate that the overall tracking performance of the proposed tracker is substantially improved. Moreover, our method obtains the top rank in performance, outperforming 17 state-of-the-art trackers on OTB2013.
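The cost saving from adaptive sampling can be made concrete with a small sketch. The numbers below are illustrative assumptions (the paper's scale selection factor and pyramid sizes are not reproduced): instead of scoring a full pyramid of scales every frame, only a few candidates around the current scale are evaluated.

```python
import numpy as np

def candidate_scales(current, factor=1.02, n_exhaustive=33, n_adaptive=5):
    """Return exhaustive vs. adaptive scale candidates around `current`."""
    def pyramid(n):
        return current * factor ** np.arange(-(n // 2), n // 2 + 1)
    return pyramid(n_exhaustive), pyramid(n_adaptive)

exhaustive, adaptive = candidate_scales(1.0)
# Each candidate costs one correlation-filter response, so the adaptive
# strategy here evaluates 5 responses per frame instead of 33.
print(len(exhaustive), "vs", len(adaptive), "filter evaluations per frame")
```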


Author(s):  
Aldo Roberto Cruces Girón ◽  
Fabrício Nogueira Corrêa ◽  
Breno Pinheiro Jacob

A variety of analysis techniques and numerical formulations are available to mooring and riser designers. They are applied at different stages of the design process of floating production systems (FPS), trading off the accuracy of results against the computational cost. In early design stages, low computational cost is valued more, with the aim of obtaining fast results and making decisions, so in these stages it is common to use uncoupled analysis. In more advanced design stages, on the other hand, the accuracy of results is valued more, for which coupled analysis is adequate; however, it can lead to excessive computing times. To overcome such high computational costs, new formulations have been proposed with the aim of obtaining results similar to a coupled analysis but at low computational cost. One of these formulations is referred to as the semi-coupled (S-C) scheme. Its main characteristic is that it combines the advantages of the uncoupled and coupled analysis techniques: analyses can be performed with very fast execution times, and the results are superior to those obtained by classical uncoupled analysis. This work presents an evaluation of the S-C scheme, made by comparing its results with those of coupled analyses. Both types of analysis were applied to a representative deepwater platform. The results show that the S-C scheme has the potential to provide results of appropriate precision at very low computational times. The S-C scheme thus represents an attractive procedure for the early and intermediate stages of the FPS design process.


2013 ◽  
Vol 2013 ◽  
pp. 1-8 ◽  
Author(s):  
Jengnan Tzeng

The singular value decomposition (SVD) is a fundamental matrix decomposition in linear algebra. It is widely applied in many modern techniques, for example, high-dimensional data visualization, dimension reduction, data mining, and latent semantic analysis. Although the SVD plays an essential role in these fields, its apparent weakness is its cubic, O(n^3), computational cost, which makes many modern applications infeasible, especially when the scale of the data is huge and growing. It is therefore imperative to develop a fast SVD method for the modern era. When the rank of a matrix is much smaller than the matrix size, some fast SVD approaches already exist. In this paper, we focus on this case, with the additional condition that the data is too large to be stored as a single matrix. We demonstrate that this fast SVD result is sufficiently accurate and, most importantly, that it can be derived immediately. Using this fast method, many infeasible modern techniques based on the SVD become viable.
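For context, the small-rank regime the paper targets is the same one exploited by standard randomized SVD. The sketch below is a generic Halko-style range-finder illustration, not the paper's method, and the rank-50 test matrix is a stand-in.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, rng=np.random.default_rng(0)):
    """Approximate rank-k SVD via a random range finder (Halko-style)."""
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)           # orthonormal basis for range(A)
    B = Q.T @ A                               # small (k + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

A = np.random.randn(2000, 50) @ np.random.randn(50, 2000)  # rank-50 matrix
U, s, Vt = randomized_svd(A, k=50)
print("relative error:", np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```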

