A General-Purpose Hierarchical Mesh Partitioning Method with Node Balancing Strategies for Large-Scale Numerical Simulations

Author(s):  
Fande Kong ◽  
Roy H. Stogner ◽  
Derek R. Gaston ◽  
John W. Peterson ◽  
Cody J. Permann ◽  
...  
Impact ◽  
2019 ◽  
Vol 2019 (10) ◽  
pp. 44-46
Author(s):  
Masato Edahiro ◽  
Masaki Gondo

The pace of technological advancement is ever-increasing, and intelligent systems, such as those found in robots and vehicles, have become larger and more complex. These intelligent systems have a heterogeneous structure, comprising a mixture of modules such as artificial intelligence (AI) and powertrain control modules that provide large-scale numerical calculation and real-time periodic processing functions. Information technology expert Professor Masato Edahiro, from the Graduate School of Informatics at Nagoya University in Japan, explains that concurrent advances in semiconductor research have led to the miniaturisation of semiconductors, allowing a greater number of processors to be mounted on a single chip and increasing potential processing power. 'In addition to general-purpose processors such as CPUs, a mixture of multiple types of accelerators such as GPGPU and FPGA has evolved, producing a more complex and heterogeneous computer architecture,' he says. Edahiro and his partners have been working on eMBP, a model-based parallelizer (MBP) that offers a mapping system for automatically and efficiently generating parallel code for multi- and many-core systems. Once the hardware description is written, eMBP bridges the gap between software and hardware: hardware vendors gain an efficient ecosystem, and software vendors no longer need to adapt their code to each particular platform.


1983 ◽  
Vol 38 ◽  
pp. 1-9
Author(s):  
Herbert F. Weisberg

We are now entering a new era of computing in political science. The first era was marked by punched-card technology. Initially, the most sophisticated analyses possible were frequency counts and tables produced on a counter-sorter, a machine that specialized in chewing up data cards. By the early 1960s, batch processing on large mainframe computers became the predominant mode of data analysis, with turnaround time of up to a week. By the late 1960s, turnaround time was cut down to a matter of a few minutes and OSIRIS and then SPSS (and more recently SAS) were developed as general-purpose data analysis packages for the social sciences. Even today, use of these packages in batch mode remains one of the most efficient means of processing large-scale data analysis.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Seyed Hossein Jafari ◽  
Amir Mahdi Abdolhosseini-Qomi ◽  
Masoud Asadpour ◽  
Maseud Rahgozar ◽  
Naser Yazdani

The entities of real-world networks are connected via different types of connections (i.e., layers). Link prediction in multiplex networks is the task of finding missing connections based on both intra-layer and inter-layer correlations. Our observations confirm that in a wide range of real-world multiplex networks, from social to biological and technological, a positive correlation exists between connection probability in one layer and similarity in other layers. Accordingly, a similarity-based automatic general-purpose multiplex link prediction method, SimBins, is devised that quantifies the amount of connection uncertainty based on observed inter-layer correlations in a multiplex network. Moreover, SimBins enhances the prediction quality in the target layer by incorporating the effect of link overlap across layers. Applying SimBins to various datasets from diverse domains, we find that it outperforms the compared methods (both baseline and state-of-the-art) in most instances when predicting links. Furthermore, SimBins imposes only minor computational overhead on the base similarity measures, making it a potentially fast method suitable for large-scale multiplex networks.
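The core idea, that similarity and links observed in one layer carry evidence about missing links in another, can be sketched in a few lines. The snippet below is a minimal illustration of that principle, not SimBins itself (which bins similarity scores and derives its weighting from the observed inter-layer correlations); the additive scoring function and the toy graphs are assumptions for demonstration only.

```python
from itertools import combinations

def common_neighbors(adj, u, v):
    """Number of neighbors shared by u and v in one layer
    (adjacency given as a dict: node -> set of neighbors)."""
    return len(adj.get(u, set()) & adj.get(v, set()))

def rank_candidate_links(target_layer, aux_layer, nodes):
    """Rank node pairs absent from the target layer by a score that
    adds intra-layer similarity to evidence from another layer:
    similarity there, plus a bonus if the pair is directly linked there."""
    scores = []
    for u, v in combinations(sorted(nodes), 2):
        if v in target_layer.get(u, set()):
            continue  # already linked in the target layer
        intra = common_neighbors(target_layer, u, v)
        inter = common_neighbors(aux_layer, u, v)
        overlap = 1 if v in aux_layer.get(u, set()) else 0
        scores.append(((u, v), intra + inter + overlap))
    return sorted(scores, key=lambda pair_score: -pair_score[1])

# Toy example: nodes 1-3 form a triangle in the target layer;
# node 4 is connected to 1 and 2 only in the auxiliary layer.
target = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
aux = {1: {2, 4}, 2: {1, 4}, 4: {1, 2}}
ranked = rank_candidate_links(target, aux, {1, 2, 3, 4})
# (1, 4) and (2, 4) outrank (3, 4): the auxiliary layer supplies
# evidence that the target layer lacks.
```

The bonus for direct linkage in the auxiliary layer loosely mirrors the link-overlap effect the abstract mentions; a real method would learn these weights from data rather than fix them to 1.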


i-com ◽  
2020 ◽  
Vol 19 (2) ◽  
pp. 139-151
Author(s):  
Thomas Schmidt ◽  
Miriam Schlindwein ◽  
Katharina Lichtner ◽  
Christian Wolff

Due to progress in affective computing, various forms of general-purpose sentiment/emotion recognition software have become available. However, such tools are rarely applied in usability engineering (UE) to measure the emotional state of participants. We investigate whether sentiment/emotion recognition software is useful for gathering objective and intuitive data that can predict usability in a way comparable to traditional usability metrics. We present the results of a UE project examining this question for three modalities: text, speech, and face. We performed a large-scale usability test (N = 125) with a counterbalanced within-subject design and two websites of varying usability. We identified a weak but significant correlation between text-based sentiment analysis of the text acquired via thinking aloud and SUS scores, as well as a weak positive correlation between the proportion of neutrality in users' voices and SUS scores. However, for the majority of the emotion recognition software's output, we could not find any significant results. Emotion metrics could not successfully differentiate between the two websites of varying usability, and regression models, whether unimodal or multimodal, could not predict usability metrics. We discuss reasons for these results and how to continue this research with more sophisticated methods.
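The kind of correlation analysis reported above can be reproduced in miniature: given one sentiment score per participant (e.g., the mean polarity of the thinking-aloud transcript) and the SUS score from the same session, a Pearson coefficient quantifies their association. The data below are invented purely for illustration; only the formula is standard.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented per-participant data: mean sentiment polarity of the
# thinking-aloud transcript vs. the SUS score of the same session.
sentiment = [-0.2, 0.1, 0.4, -0.5, 0.3, 0.0]
sus = [55.0, 70.0, 82.5, 47.5, 75.0, 65.0]
r = pearson_r(sentiment, sus)  # strongly positive for these toy data
```

In practice one would also report a significance test (e.g., via `scipy.stats.pearsonr`), since with N = 125 even a weak correlation can reach significance while explaining little variance.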


Author(s):  
M. V. Pham ◽  
F. Plourde ◽  
S. K. Doan

Heat transfer enhancement is a subject of major concern in numerous fields of industry and research, and it continues to be studied worldwide. Given the exponential growth of computing power, large-scale numerical simulations are growing steadily more realistic, and it is now possible to obtain accurate time-dependent solutions with far fewer preliminary assumptions about the problem. As a result, an increasingly wide range of physics is now open for exploration. More specifically, it is time to take full advantage of the large eddy simulation (LES) technique to describe heat transfer in staggered parallel-plate flows. From simple theory through experimental results, it has been demonstrated that surface interruption enhances heat transfer. Staggered parallel-plate geometries are therefore of great potential interest, and yet many numerical works dedicated to them have been marred by excessively simple assumptions: simulations have generally hypothesized lengthwise periodicity even though the flows are not periodic, and the LES technique has been employed only rarely. Our primary objective is to analyze the influence of turbulence on heat transfer in staggered parallel-plate fin geometries. To that end, we have developed an LES code, and numerical results are compared across several grid resolutions. We focus mainly on identifying turbulent structures and their role in heat transfer enhancement. Another key point involves the distinct roles of boundary-layer restart and the vortex shedding mechanism in heat transfer and the friction factor.


2010 ◽  
Vol 662 ◽  
pp. 409-446 ◽  
Author(s):  
G. SILANO ◽  
K. R. SREENIVASAN ◽  
R. VERZICCO

We summarize the results of an extensive campaign of direct numerical simulations of Rayleigh–Bénard convection at moderate and high Prandtl numbers (10^−1 ≤ Pr ≤ 10^4) and moderate Rayleigh numbers (10^5 ≤ Ra ≤ 10^9). The computational domain is a cylindrical cell of aspect ratio Γ = 1/2, with the no-slip condition imposed on all boundaries. By scaling the numerical results, we find that the free-fall velocity should be multiplied by 1/√Pr in order to obtain a more appropriate representation of the large-scale velocity at high Pr. We investigate the Nusselt and Reynolds number dependences on Ra and Pr, comparing the outcome with previous numerical and experimental results. Depending on Pr, we obtain different power laws of the Nusselt number with respect to Ra, ranging from Ra^(2/7) for Pr = 1 up to Ra^0.31 for Pr = 10^3. The Nusselt number is independent of Pr. The Reynolds number scales as Re ~ √Ra/Pr, neglecting logarithmic corrections. We analyse the global and local features of viscous and thermal boundary layers and their scaling behaviours with respect to Ra and Pr, and with respect to the Reynolds and Péclet numbers. We find that the flow approaches a saturation state when the Reynolds number decreases below the critical value Re_s ≃ 40. The thermal-boundary-layer thickness increases slightly (instead of decreasing) when the Péclet number increases, because of the moderating influence of the viscous boundary layer. The simulated ranges of Ra and Pr contain steady, periodic and turbulent solutions. A rough estimate of the transition from the steady to the unsteady state is obtained by monitoring the time evolution of the system until it reaches stationary solutions. We find multiple solutions as long-term phenomena at Ra = 10^8 and Pr = 10^3, which, however, do not result in significantly different Nusselt numbers.
One of these multiple solutions, even if stable over a long time interval, shows a break in the mid-plane symmetry of the temperature profile. We analyse the flow structures through the transitional phases by direct visualizations of the temperature and velocity fields. A wide variety of large-scale circulation and plume structures has been found. The single-roll circulation is characteristic only of the steady and periodic solutions. For other regimes at lower Pr, the mean flow generally consists of two opposite toroidal structures; at higher Pr, the flow is organized in the form of multi-jet structures, extending mostly in the vertical direction. At high Pr, plumes mainly detach from sheet-like structures. The signatures of different large-scale structures are generally well reflected in the data trends with respect to Ra, less in those with respect to Pr.
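The scaling relations quoted in this abstract lend themselves to quick back-of-envelope estimates. The sketch below evaluates them with unit prefactors, an assumption on our part, since the abstract reports only the exponents; for example, at Ra = 10^8 and Pr = 10^3 it gives Re = √(10^8)/10^3 = 10, well below the saturation threshold Re_s ≃ 40.

```python
import math

def nusselt(Ra, gamma):
    """Nu ~ Ra^gamma; the abstract reports gamma between 2/7 (Pr = 1)
    and 0.31 (Pr = 10^3). Unit prefactor assumed."""
    return Ra ** gamma

def reynolds(Ra, Pr):
    """Re ~ sqrt(Ra)/Pr, neglecting logarithmic corrections."""
    return math.sqrt(Ra) / Pr

def large_scale_velocity(U_f, Pr):
    """At high Pr, the free-fall velocity U_f is multiplied by
    1/sqrt(Pr) to better represent the large-scale velocity."""
    return U_f / math.sqrt(Pr)

# At Ra = 1e8 and Pr = 1e3 (where multiple solutions were found),
# the estimated Reynolds number is sqrt(1e8)/1e3 = 10, below the
# saturation threshold Re_s ~ 40.
Re = reynolds(1e8, 1e3)  # -> 10.0
```

Because the prefactors are unknown here, only ratios and trends across Ra and Pr are meaningful, not absolute values of Nu or Re.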


Author(s):  
Yuhang Zhang ◽  
Jiejie Li ◽  
Hongjian Zhou ◽  
Yiqun Hu ◽  
Suhang Ding ◽  
...  

2018 ◽  
Vol 146 (4) ◽  
pp. 1023-1044 ◽  
Author(s):  
David J. Purnell ◽  
Daniel J. Kirshbaum

The synoptic controls on orographic precipitation during the Olympic Mountains Experiment (OLYMPEX) are investigated using observations and numerical simulations. Observational precipitation retrievals for six warm-frontal (WF), six warm-sector (WS), and six postfrontal (PF) periods indicate that heavy precipitation occurred in both WF and WS periods, but the latter saw larger orographic enhancements. Such enhancements extended well upstream of the terrain in WF periods but were focused over the windward slopes in both PF and WS periods. Quasi-idealized simulations, constrained by OLYMPEX data, reproduce the key synoptic sensitivities of the OLYMPEX precipitation distributions and thus facilitate physical interpretation. These sensitivities are largely explained by three upstream parameters: the large-scale precipitation rate, the impinging horizontal moisture flux I, and the low-level static stability. Both WF and WS events exhibit a large precipitation rate and moisture flux and, thus, heavy orographic precipitation, which is greatly enhanced in amplitude and areal extent by the seeder–feeder process. However, the stronger stability of the WF periods, particularly within the frontal inversion (even when it lies above crest level), causes their precipitation enhancement to weaken and shift upstream. In contrast, the small precipitation rate and moisture flux, larger static stability, and absence of stratiform feeder clouds in the nominally unsaturated and convective PF events yield much lighter time- and area-averaged precipitation. Modest enhancements still occur over the windward slopes due to the local development and invigoration of shallow convective showers.
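For readers who want to experiment with the upstream parameters, one common bulk definition of a horizontal moisture flux is the product of air density, water-vapour mixing ratio, and cross-barrier wind speed. Both this definition and the example numbers are assumptions for illustration; the paper's I may be defined differently (e.g., integrated over a layer).

```python
def moisture_flux(rho, q_v, u):
    """Bulk horizontal water-vapour flux I = rho * q_v * u,
    in kg of vapour per square metre per second."""
    return rho * q_v * u

# Illustrative upstream conditions: rho = 1.2 kg/m^3,
# q_v = 8 g/kg, cross-barrier wind u = 15 m/s.
I = moisture_flux(1.2, 0.008, 15.0)  # -> 0.144 kg m^-2 s^-1
```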

