An Empirical Evaluation of Force-Directed Graph Layout

2021 ◽  
Author(s):  
Roman Klapaukh

Force-directed graph layout is a widely used algorithm for the automatic layout of graphs. Little experimental work has been done exploring the behaviour of the algorithm under a variety of conditions. This thesis carries out three large-scale metric-based experiments. The first explores how the core algorithm behaves under changes to initial conditions. The second looks at extending the force-directed layout algorithm with additional forces to reduce overlaps. The third develops a novel symmetry metric for graphs and uses that to explore the symmetries of graphs. This thesis also carries out a user study to show that the differences reported by metrics in the graphs are reflected in a difference in user performance when using graphs for a free-form selection task.
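For readers unfamiliar with the core algorithm, the following is a minimal Python sketch of one spring-embedder iteration scheme in the spirit of Fruchterman and Reingold; the constants, cooling schedule, and example graph are illustrative assumptions, not the exact variant evaluated in the thesis.

```python
# Minimal force-directed layout sketch (Fruchterman-Reingold style).
# All constants and the example graph are illustrative assumptions.
import numpy as np

def force_layout(edges, n, iters=200, k=1.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, size=(n, 2))    # random initial conditions
    temp = 0.1                               # step-size "temperature"
    for _ in range(iters):
        disp = np.zeros_like(pos)
        # Repulsive force of magnitude k^2/d between every pair of nodes
        for i in range(n):
            d = pos[i] - pos                 # vectors from every node to i
            dist = np.linalg.norm(d, axis=1)
            dist[i] = np.inf                 # no self-repulsion
            disp[i] += (d.T / dist**2).T.sum(axis=0) * k**2
        # Attractive force of magnitude d^2/k along each edge
        for u, v in edges:
            d = pos[u] - pos[v]
            dist = max(np.linalg.norm(d), 1e-9)
            f = d * dist / k
            disp[u] -= f
            disp[v] += f
        # Limit each node's displacement by the temperature, then cool down
        lengths = np.maximum(np.linalg.norm(disp, axis=1), 1e-9)
        pos += (disp.T / lengths).T * np.minimum(lengths, temp)[:, None]
        temp *= 0.99
    return pos

print(force_layout([(0, 1), (1, 2), (2, 0), (2, 3)], 4))
```

Because the starting positions are random, repeated runs from different seeds give different final layouts, which is precisely the sensitivity to initial conditions the first experiment measures.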


2016 ◽  
Vol 2 (4) ◽  
Author(s):  
Frédéric Payot ◽
Jean-Marie Seiler

In the field of severe accident analysis, the description of corium progression is mainly carried out using integral calculation codes. However, these tools are usually based on bounding assumptions because of the high complexity of the phenomena involved. The limitations associated with bounding situations [1] (e.g., steady-state situations and instantaneous whole-core relocation in the lower head) led CEA to develop an alternative approach to improve the phenomenological description of the melt progression. The methodology used to describe the corium progression was designed to cover accidental situations from core meltdown to molten core–concrete interaction (MCCI). This phenomenological approach is based on the available data (including lessons learned from TMI-2), on physical models, and on knowledge about corium behavior. It provides emerging trends and best-estimate intermediate situations. As several phenomena are unknown but strongly coupled, large-scale uncertainties must be taken into account for the reactor application. Furthermore, the analysis is complicated by the fact that these configurations are most probably three-dimensional (3D), all the more so because 3D effects are expected to have significant consequences for the corium progression and the resulting vessel failure. Such an analysis of the in-vessel melt progression was carried out for Unit 1 of the Fukushima Dai-ichi Nuclear Power Plant. The kinetics of core uncovering governs the core degradation and determines when molten corium first appears inside the core. The initial conditions used to carry out this analysis are based on the available results derived from codes such as the MELCOR calculation code [2]. The core degradation could then proceed in different ways: (1) axial progression of the debris and the molten fuel through the lower support plate, or (2) lateral progression of the molten fuel through the shroud. On the basis of the BALI program results [3] and the TMI-2 accident observations [4], this work focuses on the consequences of a lateral melt progression (without excluding an axial progression through the support plate). The events and the associated time sequence are analyzed in detail. This analysis also identifies a number of open issues. Random calculations and statistical analysis of the results could be performed with calculation codes such as the LEONAR–PROCOR codes [5]. This work was presented in the framework of the OECD/NEA/CSNI Benchmark Study of the Accident at the Fukushima Dai-ichi Nuclear Power Station (BSAF) project [6]. Between 2012 and 2014, the purpose of this project was both to study, by means of severe accident codes, the Fukushima accident in the three crippled units up to six days after reactor shutdown, and to provide information on, in particular, the location and composition of the core debris.
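As an illustration of the kind of random-sampling statistical study the abstract alludes to, the following Python sketch propagates assumed input uncertainties through a toy response model; the parameter names, ranges, and the response formula are purely hypothetical and carry none of the LEONAR–PROCOR physics.

```python
# Toy Monte Carlo uncertainty propagation sketch; all distributions and
# the "time to vessel failure" response below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 10_000

# Sample uncertain inputs from assumed distributions
decay_heat = rng.uniform(8.0, 14.0, n_samples)        # MW, assumed range
relocated_mass = rng.uniform(40.0, 120.0, n_samples)  # tonnes of corium, assumed
heat_transfer = rng.normal(1.0, 0.15, n_samples)      # relative coefficient, assumed

# Toy response: failure time shortens with heat, lengthens with heat removal
t_failure = 3.0 * relocated_mass * heat_transfer / decay_heat  # hours, illustrative

print(f"median: {np.median(t_failure):.1f} h, "
      f"5-95% interval: [{np.percentile(t_failure, 5):.1f}, "
      f"{np.percentile(t_failure, 95):.1f}] h")
```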


2021 ◽  
pp. 095679762097751
Author(s):  
Li Zhao ◽  
Jiaxin Zheng ◽  
Haiying Mao ◽  
Xinyi Yu ◽  
Jiacheng Ye ◽  
...  

Morality-based interventions designed to promote academic integrity are being used by educational institutions around the world. Although many such approaches have a strong theoretical foundation and are supported by laboratory-based evidence, they often have not been subjected to rigorous empirical evaluation in real-world contexts. In a naturalistic field study (N = 296), we evaluated a recent research-inspired classroom innovation in which students are told, just prior to taking an unproctored exam, that they are trusted to act with integrity. Four university classes were assigned to a proctored exam or one of three types of unproctored exam. Students who took unproctored exams cheated significantly more, which suggests that it may be premature to implement this approach in college classrooms. These findings point to the importance of conducting ecologically valid and well-controlled field studies that translate psychological theory into practice when introducing large-scale educational reforms.
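As an illustration of the basic comparison underlying such a field study, the following Python sketch tests whether cheating rates differ between proctored and unproctored groups; the counts are hypothetical placeholders, not the study's data.

```python
# Contingency-table test of cheating rate by exam condition.
# The counts below are hypothetical, chosen only to show the mechanics.
from scipy.stats import chi2_contingency

# rows: cheated / did not cheat; columns: proctored, unproctored (hypothetical)
table = [[3, 27],
         [72, 194]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```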


2021 ◽  
Vol 503 (4) ◽  
pp. 5638-5645
Author(s):  
Gábor Rácz ◽  
István Szapudi ◽  
István Csabai ◽  
László Dobos

ABSTRACT The classical gravitational force on a torus is anisotropic and always weaker than Newton's 1/r² law. We demonstrate the effects of periodicity in dark-matter-only N-body simulations of spherical collapse and standard Lambda cold dark matter (ΛCDM) initial conditions. Periodic boundary conditions cause an overall negative and anisotropic bias in cosmological simulations of cosmic structure formation. The lower amplitude of the power spectra of small periodic simulations is a consequence of the missing large-scale modes and of the equally important weaker periodic forces. The effect is most significant when the largest mildly non-linear scales are comparable to the linear size of the simulation box, as is often the case for high-resolution hydrodynamical simulations. Spherical collapse morphs into a shape similar to an octahedron. The anisotropic growth distorts the large-scale ΛCDM dark matter structures. We introduce a direction-dependent power spectrum that is invariant under the octahedral group of the simulation volume and show that the results break spherical symmetry.
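As an illustration of a direction-dependent power spectrum measurement, the following Python sketch bins the Fourier modes of a periodic box by wavenumber and by orientation relative to the box axes; the random stand-in field and the simple axis/diagonal split are simplifying assumptions, not the paper's exact octahedral-group invariant.

```python
# Direction-split power spectrum of a periodic box (illustrative sketch).
import numpy as np

n, boxsize = 64, 100.0                      # grid cells, Mpc/h (assumed)
rng = np.random.default_rng(1)
delta = rng.standard_normal((n, n, n))      # stand-in density contrast field

delta_k = np.fft.rfftn(delta)
kf = 2 * np.pi / boxsize                    # fundamental mode of the box
kx = np.fft.fftfreq(n, d=1.0 / n) * kf
kz = np.fft.rfftfreq(n, d=1.0 / n) * kf
KX, KY, KZ = np.meshgrid(kx, kx, kz, indexing="ij")
kmag = np.sqrt(KX**2 + KY**2 + KZ**2)
kmag_safe = np.where(kmag == 0, np.inf, kmag)

# Classify modes by direction: "axis" modes point near a box axis,
# the rest lie toward face/body diagonals; both classes are preserved
# by the octahedral symmetry of the box.
mu_max = np.max(np.abs(np.stack([KX, KY, KZ])) / kmag_safe, axis=0)
axis_like = mu_max > 0.9

power = np.abs(delta_k) ** 2
bins = np.arange(kf, kf * n / 2, kf)
for name, mask in [("axis-aligned", axis_like), ("diagonal", ~axis_like)]:
    idx = np.digitize(kmag[mask], bins)
    counts = np.bincount(idx)
    pk = np.bincount(idx, weights=power[mask]) / np.maximum(counts, 1)
    print(name, pk[1:6])
```

In a real simulation snapshot (rather than white noise), a systematic difference between the axis-aligned and diagonal spectra would signal the anisotropic bias described above.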


2020 ◽  
Vol 501 (2) ◽  
pp. 1755-1765
Author(s):  
Andrew Pontzen ◽  
Martin P Rey ◽  
Corentin Cadiou ◽  
Oscar Agertz ◽  
Romain Teyssier ◽  
...  

ABSTRACT We introduce a new method to mitigate numerical diffusion in adaptive mesh refinement (AMR) simulations of cosmological galaxy formation, and study its impact on a simulated dwarf galaxy as part of the ‘EDGE’ project. The target galaxy has a maximum circular velocity of $21\, \mathrm{km}\, \mathrm{s}^{-1}$ but evolves in a region that is moving at up to $90\, \mathrm{km}\, \mathrm{s}^{-1}$ relative to the hydrodynamic grid. In the absence of any mitigation, diffusion softens the filaments feeding our galaxy. As a result, gas is unphysically held in the circumgalactic medium around the galaxy for $320\, \mathrm{Myr}$, delaying the onset of star formation until cooling and collapse eventually trigger an initial starburst at z = 9. Using genetic modification, we produce ‘velocity-zeroed’ initial conditions in which the grid-relative streaming is strongly suppressed; by design, the change does not significantly modify the large-scale structure or dark matter accretion history. The resulting simulation recovers a more physical, gradual onset of star formation starting at z = 17. While the final stellar masses are nearly consistent ($4.8 \times 10^6\, \mathrm{M}_{\odot }$ and $4.4\times 10^6\, \mathrm{M}_{\odot }$ for the unmodified and velocity-zeroed cases, respectively), the dynamical and morphological structures of the z = 0 dwarf galaxies are markedly different due to the contrasting histories. Our approach to diffusion suppression is suitable for any AMR zoom cosmological galaxy formation simulation, and is especially recommended for those of small galaxies at high redshift.
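The paper achieves velocity zeroing through genetic modification, which alters the initial conditions self-consistently with the underlying field statistics. As a much cruder illustration of the underlying idea only, the following Python sketch simply subtracts the mass-weighted bulk velocity of a zoom region from all particle velocities; the toy data, region mask, and function name are assumptions for illustration, not the EDGE pipeline.

```python
# Naive bulk-velocity subtraction sketch (NOT the paper's genetic
# modification method): put the zoom region at rest relative to the grid.
import numpy as np

def zero_bulk_velocity(pos, vel, mass, center, radius):
    """Remove the mean streaming motion of particles within `radius` of `center`."""
    in_region = np.linalg.norm(pos - center, axis=1) < radius
    bulk = np.average(vel[in_region], axis=0, weights=mass[in_region])
    return vel - bulk                       # applied uniformly to all particles

# Hypothetical toy data: 1000 particles streaming at ~90 km/s along x
rng = np.random.default_rng(0)
pos = rng.uniform(0, 50, (1000, 3))         # Mpc, assumed box
vel = rng.normal([90, 0, 0], 30, (1000, 3)) # km/s
mass = np.ones(1000)

center = np.array([25.0, 25.0, 25.0])
vel0 = zero_bulk_velocity(pos, vel, mass, center, radius=10.0)
in_region = np.linalg.norm(pos - center, axis=1) < 10.0
print(vel0[in_region].mean(axis=0))         # approximately zero
```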


Universe ◽  
2021 ◽  
Vol 7 (8) ◽  
pp. 276
Author(s):  
Muhammad Zahid Mughal ◽  
Iftikhar Ahmad ◽  
Juan Luis García Guirao

This review article traces the development of relativistic cosmology and the introduction into it of inflation as an exponentially expanding phase of the early universe. We study the properties of the standard cosmological model developed within relativistic cosmology, together with the geometric structure of spacetime coherently connected with it, and we further investigate the geometric properties of space and spacetime built into the standard model of cosmology. The big bang model of the beginning of the universe is based on the standard model, which fails to explain the flatness and the large-scale homogeneity of the universe demonstrated by observational evidence. These cosmological problems were resolved by introducing a brief phase of accelerated expansion in the very early universe, known as inflation; by setting the initial conditions of the standard big bang model, cosmic inflation removes these difficulties from the theory. We discuss how the inflationary paradigm solves these problems by proposing a period of fast expansion in the early universe. Inflation and dark energy in f(R) modified gravity are also reviewed.
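To make the flatness argument concrete, the following is the standard textbook derivation (not specific to this review) of why a phase of accelerated expansion drives the universe toward flatness:

```latex
% Standard flatness argument. The Friedmann equation with spatial curvature k,
%   H^2 = \frac{8\pi G}{3}\rho - \frac{k}{a^2},
% can be rewritten as a deviation from the critical density:
\Omega - 1 = \frac{k}{a^{2}H^{2}}
% In decelerating expansion aH shrinks, so |\Omega - 1| grows with time and
% flatness today would require extreme fine-tuning at early times.
% During inflation, a \propto e^{Ht} with H \approx \mathrm{const}, hence
\lvert \Omega - 1 \rvert \;\propto\; \frac{1}{a^{2}H^{2}} \;\propto\; e^{-2Ht} \;\to\; 0
% so of order 60 e-folds drive \Omega exponentially close to 1.
```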


2011 ◽  
Vol 24 (12) ◽  
pp. 2963-2982 ◽  
Author(s):  
Andrea Alessandri ◽  
Andrea Borrelli ◽  
Silvio Gualdi ◽  
Enrico Scoccimarro ◽  
Simona Masina

Abstract This study investigates the predictability of tropical cyclone (TC) seasonal count anomalies using the Centro Euro-Mediterraneo per i Cambiamenti Climatici–Istituto Nazionale di Geofisica e Vulcanologia (CMCC-INGV) Seasonal Prediction System (SPS). To this aim, nine-member ensemble forecasts for the period 1992–2001, with two starting dates per year, were performed. The skill in reproducing the observed TC counts was evaluated after applying a TC location and tracking detection method to the retrospective forecasts. The SPS displays good skill in predicting the observed TC count anomalies, particularly over the tropical Pacific and Atlantic Oceans. The simulated TC activity exhibits realistic geographical distribution and interannual variability, indicating that the model is able to reproduce the major basic mechanisms that link TC occurrence with the large-scale circulation. The prediction of TC count anomalies was found to be sensitive to the assimilation of subsurface ocean data at initialization. Compared with control simulations performed without assimilated initial conditions, the assimilation significantly improves the prediction of the TC count anomalies over the eastern North Pacific Ocean (ENP) and the northern Indian Ocean (NI) during boreal summer. During austral summer, significant improvements were evident over the area surrounding Australia (AUS) and, in terms of the probabilistic quality of the predictions, also over the southern Indian Ocean (SI). The analysis shows that the improvement in the prediction of anomalous TC counts follows the enhancement in forecasting daily anomalies in sea surface temperature due to subsurface ocean initialization. Furthermore, the skill changes appear to be partly related to forecast differences in convective available potential energy (CAPE) over the ENP and the North Atlantic Ocean (ATL), in wind shear over the NI, and in both CAPE and wind shear over the SI.
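As an illustration of the deterministic part of such a skill evaluation, the following Python sketch correlates ensemble-mean hindcast TC counts with observed counts over a ten-year period; all numbers are synthetic placeholders, and the real evaluation also involves probabilistic scores.

```python
# Anomaly-correlation skill of ensemble-mean TC counts (synthetic sketch).
import numpy as np

rng = np.random.default_rng(3)
years = 10
observed = rng.poisson(15, years).astype(float)       # observed basin counts
ensemble = observed + rng.normal(0, 2, (9, years))    # 9-member hindcast, toy noise

obs_anom = observed - observed.mean()                 # observed count anomalies
fc_anom = ensemble.mean(axis=0) - ensemble.mean()     # ensemble-mean anomalies
r = np.corrcoef(obs_anom, fc_anom)[0, 1]              # anomaly correlation
print(f"anomaly correlation: {r:.2f}")
```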


Author(s):  
Lianli Gao ◽  
Pengpeng Zeng ◽  
Jingkuan Song ◽  
Yuan-Fang Li ◽  
Wu Liu ◽  
...  

To date, visual question answering (VQA) (i.e., image QA and video QA) remains a holy grail in vision and language understanding, especially video QA. Compared with image QA, which focuses primarily on understanding the associations between image region-level details and the corresponding questions, video QA requires a model to jointly reason across both the spatial and long-range temporal structures of a video as well as the text to provide an accurate answer. In this paper, we specifically tackle the problem of video QA by proposing a Structured Two-stream Attention network, namely STA, to answer a free-form or open-ended natural language question about the content of a given video. First, we infer rich long-range temporal structures in videos using our structured segment component and encode text features. Then, our structured two-stream attention component simultaneously localizes important visual instances, reduces the influence of background video, and focuses on the relevant text. Finally, the structured two-stream fusion component incorporates different segments of the query- and video-aware context representation and infers the answers. Experiments on the large-scale video QA dataset TGIF-QA show that our proposed method significantly surpasses the best counterpart (i.e., with one representation for the video input) by 13.0%, 13.5%, and 11.0% on the Action, Trans., and FrameQA tasks, and by 0.3 on the Count task. It also outperforms the best competitor (i.e., with two representations) on the Action, Trans., and FrameQA tasks by 4.1%, 4.7%, and 5.1%, respectively.
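As an illustration of the two-stream idea, the following numpy sketch lets a question vector attend separately over video-segment features and word features before fusing the two context vectors; the dimensions, the mean-pooled question summary, and the concatenation fusion are illustrative assumptions, not the exact STA architecture.

```python
# Two-stream dot-product attention sketch: one stream over video segments,
# one over question words, fused by concatenation. Shapes are illustrative.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys):
    """Weight each key by its similarity to the query and return the mixture."""
    weights = softmax(keys @ query)          # (num_items,)
    return weights @ keys                    # weighted sum, shape (dim,)

rng = np.random.default_rng(0)
dim = 64
video_segments = rng.standard_normal((8, dim))   # 8 temporal segment features
question_words = rng.standard_normal((12, dim))  # 12 word embeddings
question = question_words.mean(axis=0)           # crude question summary

visual_ctx = attend(question, video_segments)    # visual stream
text_ctx = attend(question, question_words)      # textual stream
fused = np.concatenate([visual_ctx, text_ctx])   # input to an answer classifier
print(fused.shape)                               # (128,)
```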


2016 ◽  
Vol 29 (14) ◽  
pp. 5281-5297 ◽  
Author(s):  
Who M. Kim ◽  
Stephen Yeager ◽  
Ping Chang ◽  
Gokhan Danabasoglu

Abstract Deep convection in the Labrador Sea (LS) resumed in the winter of 2007/08 under a moderately positive North Atlantic Oscillation (NAO) state. This is in sharp contrast with the previous winter with weak convection, despite a similar positive NAO state. This disparity is explored here by analyzing reanalysis data and forced-ocean simulations. It is found that the difference in deep convection is primarily due to differences in large-scale atmospheric conditions that are not accounted for by the conventional NAO definition. Specifically, the 2007/08 winter was characterized by an atmospheric circulation anomaly centered in the western North Atlantic, rather than the eastern North Atlantic that the conventional NAO emphasizes. This anomalous circulation was also accompanied by anomalously cold conditions over northern North America. The controlling influence of these atmospheric conditions on LS deep convection in the 2008 winter is confirmed by sensitivity experiments where surface forcing and/or initial conditions are modified. An extended analysis for the 1949–2009 period shows that about half of the winters with strong heat losses in the LS are associated with such a west-centered circulation anomaly and cold conditions over northern North America. These are found to be accompanied by La Niña–like conditions in the tropical Pacific, suggesting that the atmospheric response to La Niña may have a strong influence on LS deep convection.

