Random Node Pair Sampling-Based Estimation of Average Path Lengths in Networks

Author(s):  
Luis E. Castro ◽  
Nazrul I. Shaikh

The average path length (APL) of a network is an important metric that provides insight into the interconnectivity of the network and into how much time and effort search and navigation on that network would require. However, estimating the APL is time-consuming because its computational complexity scales nonlinearly with the network size. In this article, the authors develop a computationally efficient random node pair sampling algorithm that estimates the APL to a specified precision and confidence. The proposed sampling algorithms provide speed-up factors ranging from 240 to 750 for networks with more than 100,000 nodes. The authors also find that the computational time required to estimate the APL does not necessarily increase with the network size; it follows an inverted-U shape instead.
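
As a rough illustration of the approach (not the authors' exact algorithm), the sketch below samples random node pairs and stops once a normal-approximation confidence interval reaches the requested relative precision. The function name estimate_apl, the batch size, and the networkx test graph are all illustrative assumptions.

```python
import math
import random
from statistics import NormalDist

import networkx as nx

def estimate_apl(G, precision=0.05, confidence=0.95, batch=100, max_pairs=100_000):
    """Estimate the average path length by sampling random node pairs,
    stopping once the confidence-interval half-width falls below
    `precision` times the running mean."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    nodes = list(G.nodes)
    lengths = []
    while len(lengths) < max_pairs:
        for _ in range(batch):
            u, v = random.sample(nodes, 2)
            try:
                lengths.append(nx.shortest_path_length(G, u, v))
            except nx.NetworkXNoPath:
                continue  # skip disconnected pairs (an assumption of this sketch)
        n = len(lengths)
        mean = sum(lengths) / n
        var = sum((x - mean) ** 2 for x in lengths) / (n - 1)
        half = z * math.sqrt(var / n)
        if half <= precision * mean:
            break
    return mean, half

G = nx.barabasi_albert_graph(10_000, 3)  # synthetic test network
apl, err = estimate_apl(G)
print(f"APL ~ {apl:.3f} +/- {err:.3f}")
```

On large sparse networks the cost is dominated by the per-pair shortest-path queries, which is why pair sampling can beat the all-pairs computation by orders of magnitude.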

Author(s):  
Reza Alizadeh ◽  
Liangyue Jia ◽  
Anand Balu Nellippallil ◽  
Guoxin Wang ◽  
Jia Hao ◽  
...  

In engineering design, surrogate models are often used instead of costly computer simulations. Typically, a single surrogate model is selected based on previous experience. Based on an analysis of the published literature, we observe that fitting an ensemble of surrogates (EoS) using cross-validation errors is more accurate but requires more computational time. In this paper, we propose a method to build an EoS that is both accurate and less computationally expensive. In the proposed method, the EoS is a weighted-average surrogate of response surface models, kriging, and radial basis functions, with weights based on the overall cross-validation error. We demonstrate that the created EoS is more accurate than the individual surrogates even when fewer data points are used, and is therefore computationally efficient while providing relatively insensitive predictions. We demonstrate the use of an EoS using hot rod rolling as an example. Finally, we include a rule-based template that can be used for other problems with similar requirements, for example, on computational time, required accuracy, and the size of the data.
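
A minimal sketch of the weighting idea, assuming scikit-learn stand-ins for the three surrogate families (a quadratic response surface, GaussianProcessRegressor for kriging, and KernelRidge with an RBF kernel for radial basis functions). The inverse-RMSE weighting is one plausible reading of "based on overall cross-validation error", not necessarily the paper's exact scheme.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

# Candidate surrogates (stand-ins for RSM, kriging, and RBF).
surrogates = {
    "rsm": make_pipeline(PolynomialFeatures(2), LinearRegression()),
    "kriging": GaussianProcessRegressor(),
    "rbf": KernelRidge(kernel="rbf"),
}

def fit_ensemble(X, y, cv=5):
    """Fit each surrogate and weight it by the inverse of its overall CV error."""
    errors = {}
    for name, model in surrogates.items():
        mse = -cross_val_score(model, X, y, cv=cv,
                               scoring="neg_mean_squared_error").mean()
        errors[name] = np.sqrt(mse)
        model.fit(X, y)
    inv = {k: 1.0 / e for k, e in errors.items()}
    total = sum(inv.values())
    return {k: v / total for k, v in inv.items()}

def predict_ensemble(weights, X):
    """Weighted-average prediction over the fitted surrogates."""
    return sum(w * surrogates[k].predict(X) for k, w in weights.items())

# Toy usage on a synthetic 2D response:
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (40, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
w = fit_ensemble(X, y)
print(w, predict_ensemble(w, X[:5]))
```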


Author(s):  
Zhonghai Jin ◽  
Andrew Lacis

A computationally efficient method is presented to account for horizontal cloud inhomogeneity by using a radiatively equivalent plane-parallel homogeneous (PPH) cloud. The algorithm accurately matches the reference (rPPH) independent column approximation (ICA) results while using only the computational time required for a single plane-parallel computation. The effective optical depth of this synthetic (sPPH) cloud is derived by exactly matching its direct transmission to that of the inhomogeneous ICA cloud. The effective scattering asymmetry factor is found from a pre-calculated albedo inverse look-up table and is allowed to vary over the range from -1.0 to 1.0. In the special cases of conservative scattering and total absorption, the synthetic method is exactly equivalent to the ICA; otherwise only a small bias (about 0.2% in flux) relative to the ICA arises from imperfect interpolation in the look-up tables. In principle, the ICA albedo can be approximated accurately regardless of cloud inhomogeneity. For a more complete comparison, the broadband shortwave albedo and transmission calculated from the synthetic sPPH cloud and averaged over all incident directions have RMS biases of 0.26% and 0.76%, respectively, for inhomogeneous clouds over a wide range of particle sizes. The advantages of the synthetic PPH method are that (1) it does not require all cloud subcolumns to have uniform microphysical characteristics, (2) it is applicable to any 1D radiative transfer scheme, and (3) it can handle arbitrary cloud optical depth distributions and an arbitrary number of cloud subcolumns with uniform computational efficiency.
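
The direct-transmission matching step follows directly from the abstract: the sketch below computes the effective optical depth that makes a homogeneous cloud's direct beam match the ICA average over subcolumns. Here mu0 is the cosine of the incidence angle, and the gamma-distributed subcolumn optical depths are illustrative; the asymmetry-factor look-up table is omitted.

```python
import numpy as np

def effective_optical_depth(tau, mu0=1.0):
    """Optical depth of a homogeneous (PPH) cloud whose direct transmission
    matches the ICA mean over inhomogeneous subcolumns.

    Solves exp(-tau_eff / mu0) = mean_i exp(-tau_i / mu0).
    """
    t_dir = np.mean(np.exp(-np.asarray(tau) / mu0))
    return -mu0 * np.log(t_dir)

# Example: gamma-distributed subcolumn optical depths (illustrative).
rng = np.random.default_rng(1)
tau = rng.gamma(shape=2.0, scale=5.0, size=1000)   # mean optical depth = 10
print(np.mean(tau), effective_optical_depth(tau))  # tau_eff < mean(tau)
```

Because exp(-tau) is convex, tau_eff comes out smaller than the mean subcolumn optical depth (Jensen's inequality); this is the usual plane-parallel bias that the construction absorbs.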


2020 ◽  
Vol 245 ◽  
pp. 02001
Author(s):  
Marilena Bandieramonte ◽  
John Derek Chapman ◽  
Justin Chiu ◽  
Heather Gray ◽  
Miha Muskinja

Estimates of the CPU resources that will be needed to produce simulated data for the future runs of the ATLAS experiment at the LHC indicate a compelling need to speed up the process in order to reduce the computational time required. While several fast-simulation projects are ongoing, full Geant4-based simulation will still be heavily used and is expected to consume the largest portion of the total estimated processing time. In order to run effectively on modern architectures and profit from multi-core designs, the Athena framework was migrated to a multi-threaded processing model. Multi-threaded simulation based on AthenaMT and Geant4MT enables substantial reductions in the memory footprint of jobs, largely from shared geometry and cross-section tables. This approach scales better than the multi-process approach (AthenaMP), especially on the architectures foreseen for the next LHC runs. In these proceedings we report on the status of the multi-threaded simulation in ATLAS, focusing on the various challenges of its validation. We describe the tools and strategies that have been used to debug multi-threaded runs against the corresponding sequential ones, in order to obtain fully reproducible and consistent simulation results.
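
Validating a multi-threaded simulation against a sequential one hinges on order-independent, event-by-event comparison of outputs. The sketch below is purely illustrative (it is not an ATLAS tool, and the event schema is invented): it digests each event record and matches the two runs on event number, so a mere reordering of events under multi-threading does not flag a difference.

```python
import hashlib
import json

def event_digest(event):
    """Stable digest of one simulated event record (hypothetical schema)."""
    payload = json.dumps(event, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def compare_runs(seq_events, mt_events):
    """Compare sequential vs multi-threaded runs event by event.

    MT runs may finish events out of order, so match on event number
    rather than on position in the output stream.
    """
    seq = {e["event_number"]: event_digest(e) for e in seq_events}
    mt = {e["event_number"]: event_digest(e) for e in mt_events}
    return [n for n in seq if seq[n] != mt.get(n)]

# Toy usage with fake event records:
run_a = [{"event_number": i, "hits": [i, i + 1]} for i in range(3)]
run_b = list(reversed(run_a))   # same events, different completion order
print(compare_runs(run_a, run_b))  # [] -> runs are consistent
```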


Author(s):  
Jeff Irwin ◽  
P. Michaleris

A line input model has been developed that makes the accurate modeling of powder bed processes more computationally efficient. Goldak’s ellipsoidal model has been used extensively to model heat sources in additive manufacturing, including lasers and electron beams. To accurately model the motion of the heat source, the simulation time increments must be small enough that the source moves a distance smaller than its radius over the course of each increment. When the source radius is small and its velocity is large, a strict condition is imposed on the size of the time increments regardless of any stability criteria. In powder bed systems, where radii of 0.1 mm and velocities of 500 mm/s are typical, a significant computational burden can result. The line heat input model relieves this burden by averaging the heat source over its path. This model allows an entire heat source scan to be simulated in just one time increment. However, such large time increments can lead to inaccurate results. Instead, the scan is broken up into several linear segments, each of which is applied in one increment. In this work, time increments are found that yield accurate results (less than 10% displacement error) and require less than 1/10 of the CPU time required by Goldak’s moving source model. A dimensionless correlation is given that can be used to determine the time increment size that will greatly decrease the computational time required for any powder bed simulation while maintaining accuracy.
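
A sketch of the averaging idea, assuming a single-ellipsoid Goldak-type source and made-up process parameters; the finite-element time stepping of the actual work is not reproduced. Averaging the moving source over a scan segment lets one large increment stand in for the many small increments needed to track the moving ellipsoid.

```python
import numpy as np

def goldak(x, y, z, Q=200.0, a=1e-4, b=1e-4, c=1e-4):
    """Ellipsoidal (Goldak-type) heat source centred at the origin [W/m^3]."""
    coef = 6.0 * np.sqrt(3.0) * Q / (a * b * c * np.pi ** 1.5)
    return coef * np.exp(-3 * (x**2 / a**2 + y**2 / b**2 + z**2 / c**2))

def line_input(x, y, z, x0, x1, n=200, **kw):
    """Average the moving source over a linear scan segment from x0 to x1.

    Applying this averaged field for one large time increment replaces
    the many small increments needed to track the moving ellipsoid.
    """
    s = np.linspace(0.0, 1.0, n)
    centres = x0 + np.outer(s, np.asarray(x1) - np.asarray(x0))
    q = [goldak(x - cx, y - cy, z - cz, **kw) for cx, cy, cz in centres]
    return np.mean(q, axis=0)

# Averaged heat input at a point near a 5 mm scan along x (illustrative):
print(line_input(0.0025, 0.0, 0.0, x0=(0, 0, 0), x1=(0.005, 0, 0)))
```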


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ahmad H. Bokhari ◽  
Martin Berggren ◽  
Daniel Noreland ◽  
Eddie Wadbro

A subwoofer generates the lowest frequency range in loudspeaker systems. Subwoofers are used in audio systems for live concerts, movie theatres, home theatres, gaming consoles, cars, etc. During the last few decades, numerical simulations have emerged as a cost- and time-efficient complement to traditional experiments in the product design process. The aim of this study is to reduce the computational time of simulating the average response for a given subwoofer design. To this end, we propose a hybrid 2D–3D model that reduces the computational time significantly compared to a full 3D model. The hybrid model describes the interaction between different subwoofer components as interacting modules whose acoustic properties can partly be pre-computed. This allows us to efficiently compute the performance of different subwoofer design layouts. The results of the hybrid model are validated against both a lumped element model and a full 3D model over the frequency band of interest. The hybrid model is found to be both accurate and computationally efficient.
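
The paper's hybrid 2D–3D coupling is more elaborate than anything shown here, but as a rough analogue of composing modules with pre-computed acoustic properties, the sketch below chains two-port transfer matrices of duct segments; the geometry and material values are illustrative.

```python
import numpy as np

def straight_duct(length, area, rho=1.21, c=343.0):
    """Two-port (transfer-matrix) model of a duct segment, per frequency."""
    def T(f):
        k = 2 * np.pi * f / c
        Zc = rho * c / area   # characteristic acoustic impedance
        return np.array([[np.cos(k * length), 1j * Zc * np.sin(k * length)],
                         [1j * np.sin(k * length) / Zc, np.cos(k * length)]])
    return T

def cascade(modules, f):
    """Chain the module matrices at frequency f; each module can be built
    (and reused) independently, which is the point of the modular split."""
    M = np.eye(2, dtype=complex)
    for T in modules:
        M = M @ T(f)
    return M

modules = [straight_duct(0.30, 0.01), straight_duct(0.10, 0.004)]
print(cascade(modules, 50.0))  # system matrix at 50 Hz
```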


1999 ◽  
Vol 122 (1) ◽  
pp. 182-190 ◽  
Author(s):  
S. V. Kamarthi ◽  
S. T. S. Bukkapatnam ◽  
S. Hsieh

This paper presents an analytical model of the tool path for staircase traversal of convex polygonal surfaces, and an algorithm (referred to as OPTPATH) developed based on the model to find the sweep angle that gives a near-optimal tool path length. The OPTPATH algorithm can be used for staircase traversal with or without (i) overlaps between successive sweep passes and (ii) rapid traversal along edge passes. This flexibility renders OPTPATH applicable not only to conventional operations such as face and pocket milling, but also to other processes such as robotic deburring, rapid prototyping, and robotic spray painting. The effective tool path lengths provided by OPTPATH are compared with those given by two other algorithms: (i) a common industrial heuristic, referred to as the IH algorithm, and (ii) an algorithm proposed by Prabhu et al. (Prabhu, P. V., Gramopadhye, A. K., and Wang, H. P., 1990, Int. J. Prod. Res., 28, No. 1, pp. 101–130), referred to as the PGW algorithm. The comparison is conducted using 100 randomly generated convex polygons of different shapes and a set of seven different tool diameters. It is found that OPTPATH performs better than both the IH and the PGW algorithms, and its superiority becomes more pronounced for large tool diameters. [S1087-1357(00)71501-2]
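
OPTPATH's analytical model is not reproduced here; as an illustrative stand-in, the sketch below computes the staircase path length of a convex polygon numerically for a given sweep angle and finds a near-optimal angle by coarse search. The polygon, the stepover, and the angular grid are assumptions.

```python
import numpy as np

def chord_lengths(poly, angle, step):
    """Lengths of parallel sweep passes across a convex polygon.

    poly: (n, 2) vertex array; angle: sweep direction [rad]; step: stepover.
    """
    c, s = np.cos(-angle), np.sin(-angle)
    R = np.array([[c, -s], [s, c]])
    p = poly @ R.T                      # rotate so sweep passes are horizontal
    ys = np.arange(p[:, 1].min() + step / 2, p[:, 1].max(), step)
    lengths = []
    for y in ys:
        xs = []
        for (x0, y0), (x1, y1) in zip(p, np.roll(p, -1, axis=0)):
            if (y0 - y) * (y1 - y) < 0:  # edge strictly crosses this sweep line
                t = (y - y0) / (y1 - y0)
                xs.append(x0 + t * (x1 - x0))
        if len(xs) == 2:                 # convex polygon: two crossings
            lengths.append(abs(xs[1] - xs[0]))
    return np.array(lengths)

def path_length(poly, angle, step):
    """Sweep passes plus the stepover links between consecutive passes."""
    L = chord_lengths(poly, angle, step)
    return L.sum() + step * max(len(L) - 1, 0)

# Near-optimal sweep angle by coarse search (a stand-in for OPTPATH):
poly = np.array([[0, 0], [4, 0], [5, 2], [2, 3.5], [-0.5, 2]])
angles = np.linspace(0, np.pi, 181)
best = min(angles, key=lambda a: path_length(poly, a, step=0.4))
print(np.degrees(best), path_length(poly, best, 0.4))
```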


2008 ◽  
Vol 130 (10) ◽  
Author(s):  
C. Caliot ◽  
G. Flamant ◽  
M. El Hafi ◽  
Y. Le Maoult

This paper compares spectral narrow-band models based on the correlated-k (CK) approach in the specific area of remote sensing of plume signatures. The CK models considered may or may not include the fictitious-gas (FG) idea and the single-mixture-gas (SMG) assumption. The accuracies of the CK and CK-SMG models, as well as of the CKFG and CKFG-SMG models, are compared, and the influence of the SMG assumption is inferred. The errors induced by each model are compared in a sensitivity study involving the plume thickness and the atmospheric path length as parameters. The study covers two remote-sensing situations with different absolute pressures: at sea level (10⁵ Pa) and at high altitude (16.6 km, 10⁴ Pa). The comparisons are based on the error in the integrated intensity leaving the line of sight, computed in three common spectral bands: 2000–2500 cm⁻¹, 3450–3850 cm⁻¹, and 3850–4150 cm⁻¹. In most situations, the SMG assumption induces negligible differences. Furthermore, compared to the CKFG model, the CKFG-SMG model reduces the computational time by a factor of 2.
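
The core of any CK model is replacing a line-by-line spectral integral with a few quadrature terms over a k-distribution. The sketch below shows only that kernel, with invented weights and absorption coefficients; the fictitious-gas and mixture treatments compared in the paper are beyond its scope.

```python
import numpy as np

def ck_band_transmission(g_weights, k_vals, path_amount):
    """Band-averaged transmissivity from a correlated-k distribution:
    T = sum_i w_i * exp(-k_i * u). A handful of quadrature terms stands
    in for an integral over thousands of spectral points."""
    return np.sum(np.asarray(g_weights) * np.exp(-np.asarray(k_vals) * path_amount))

# Toy 4-point k-distribution (illustrative numbers, not a real gas):
w = np.array([0.45, 0.30, 0.20, 0.05])   # quadrature weights over g-space
k = np.array([0.01, 0.1, 1.0, 10.0])     # absorption coefficients
print(ck_band_transmission(w, k, path_amount=2.0))
```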


Author(s):  
Siyao Luan ◽  
Deborah L. Thurston ◽  
Madhav Arora ◽  
James T. Allison

In some cases, the level of effort required to formulate and solve an engineering design problem as a mathematical optimization problem is significant, and the potential improvement in design performance may not be worth the excessive effort. In this article we address the tradeoffs associated with formulation and modeling effort. We define three core elements (dimensions) of design formulations: design representation, comparison metrics, and predictive model. Each formulation dimension offers opportunities for the design engineer to balance the expected quality of the solution against the level of effort and time required to reach it. This paper demonstrates how guidelines can be used to create alternative formulations of the same underlying design problem, and how the resulting solutions can be evaluated and compared. Using a vibration absorber design example, the guidelines are enumerated, explained, and used to compose six alternative optimization formulations featuring different objective functions, decision variables, and constraints. The six formulations are then solved, and scores reflecting their complexity, computational time, and solution quality are quantified and compared. The results illustrate the unavoidable tradeoffs among these three attributes; the best formulation depends on which set of tradeoffs is preferable in a given situation.
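
As a concrete instance of one such formulation (one possible formulation, not necessarily any of the six the authors used), the sketch below minimizes the peak frequency response of a primary mass by choosing the absorber stiffness and damping; all numerical values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

m1, k1, c1 = 1.0, 1.0, 0.02    # primary system (normalised units)
m2 = 0.05                       # absorber mass fixed at 5% of m1
w = np.linspace(0.5, 1.5, 600)  # frequency grid around the primary resonance

def peak_response(x):
    """Worst-case primary-mass amplitude for absorber design x = (k2, c2),
    from the 2-DOF dynamic stiffness matrix solved in closed form."""
    k2, c2 = x
    z12 = -(k2 + 1j * w * c2)
    z11 = k1 + k2 - m1 * w**2 + 1j * w * (c1 + c2)
    z22 = k2 - m2 * w**2 + 1j * w * c2
    x1 = z22 / (z11 * z22 - z12**2)  # X1 for a unit force on the primary mass
    return np.max(np.abs(x1))

res = minimize(peak_response, x0=[0.05, 0.01],
               bounds=[(1e-4, 0.5), (1e-4, 0.5)], method="Nelder-Mead")
print(res.x, res.fun)  # absorber stiffness/damping and the resulting peak
```

Changing the objective (e.g., bandwidth instead of peak response), the decision variables, or the constraints yields exactly the kind of alternative formulations the paper compares.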


Author(s):  
Feng Jie Zheng ◽  
Fu Zheng Qu ◽  
Xue Guan Song

Reservoir-pipe-valve (RPV) systems are widely used in many industrial processes. The pressure in an RPV system plays an important role in its safe operation, especially during sudden operations such as rapid valve opening or closing. To investigate the pressure, and especially the pressure fluctuation, in an RPV system, a multidimensional, multiscale model combining the method of characteristics (MOC) with the computational fluid dynamics (CFD) method is proposed. In the model, the reservoir is represented by a zero-dimensional virtual point, the pipe by a one-dimensional MOC model, and the valve by a three-dimensional CFD model; an interface model connects the components across dimensions and scales. Based on the model, a transient simulation of the turbulent flow in an RPV system is conducted, which captures not only the pressure fluctuation in the pipe but also the detailed pressure distribution in the valve. The results show that the proposed model agrees well with a full CFD model in both large-scale and small-scale regions. Moreover, the proposed model is more computationally efficient than the full CFD model, making the analysis of complex RPV systems feasible within an affordable computational time.
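
The 3D CFD valve module and the interface model are beyond a short sketch, but the 1D pipe part is classical. The code below is a textbook method-of-characteristics water-hammer step with a reservoir boundary upstream and a valve boundary downstream; all parameter values are illustrative, and friction uses the usual explicit approximation.

```python
import numpy as np

# Pipe and fluid parameters (illustrative values)
a, g = 1200.0, 9.81           # wave speed [m/s], gravity [m/s^2]
L, D, f = 100.0, 0.1, 0.02    # pipe length [m], diameter [m], friction factor
N = 50                         # number of reaches
dx = L / N
dt = dx / a                    # MOC grid: dt is fixed by the characteristics
A = np.pi * D**2 / 4
B = a / (g * A)
R = f * dx / (2 * g * D * A**2)

H = np.full(N + 1, 50.0)       # initial head [m]
Q = np.full(N + 1, 0.01)       # initial flow [m^3/s]

def moc_step(H, Q, H_res, Q_valve):
    """One characteristics step: C+ from the left neighbour, C- from the right."""
    Hn, Qn = H.copy(), Q.copy()
    Cp = H[:-2] + Q[:-2] * (B - R * np.abs(Q[:-2]))   # C+ invariants
    Cm = H[2:] - Q[2:] * (B - R * np.abs(Q[2:]))      # C- invariants
    Hn[1:-1] = 0.5 * (Cp + Cm)
    Qn[1:-1] = (Cp - Cm) / (2 * B)
    Hn[0] = H_res                                      # reservoir boundary
    Qn[0] = (Hn[0] - (H[1] - Q[1] * (B - R * np.abs(Q[1])))) / B
    Qn[-1] = Q_valve                                   # valve boundary (closure)
    Hn[-1] = H[-2] + Q[-2] * (B - R * np.abs(Q[-2])) - B * Qn[-1]
    return Hn, Qn

for _ in range(200):            # transient after instantaneous valve closure
    H, Q = moc_step(H, Q, H_res=50.0, Q_valve=0.0)
print(H[-1])                    # head at the valve after the transient
```

In the paper's scheme, the downstream boundary would instead exchange pressure and flow with the 3D CFD valve model through the interface each time step.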

