system memory
Recently Published Documents

Total documents: 88 (five years: 34)
H-index: 9 (five years: 1)

2021
Author(s): Haonan Jin, Lesheng He, Liang Dong, Yongliang Tan, Qingyang Kong

Drastic changes in the solar wind can cause serious harm to human life. Monitoring interplanetary scintillation (IPS) makes it possible to predict solar wind activity and thereby effectively reduce the harm caused by space weather. To address the inability of the 40-meter radio telescope at the Yunnan Astronomical Observatory of China to observe the IPS phenomenon in the frequency band around 300 MHz, an IPS real-time acquisition and processing scheme based on an all-programmable system-on-chip (APSoC) was proposed. The system calculates the average power of the IPS signal over 10 ms windows on the PL side and transmits it to system memory through the AXI4 bus. The PS side reads the data, takes logarithms, packages the result, and finally transmits it to a LabVIEW host computer over gigabit Ethernet in UDP mode for display and storage. Experimental tests show that the system functions correctly: the PL-side power consumption is only 1.955 W, the time resolution is a high 10 ms, and no data was lost during 24 hours of continuous observation, demonstrating good stability. The system has practical application value in IPS observation.
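The processing chain described above (10 ms power averaging on the PL side, logarithm and packaging on the PS side) can be sketched in Python. The function names, the dB scaling, and the 8-byte network-order payload are illustrative assumptions; the real system implements these steps in FPGA fabric and embedded software.

```python
import math
import struct

def average_power(samples):
    # PL-side analogue: mean power of one 10 ms window of samples
    return sum(s * s for s in samples) / len(samples)

def package_window(samples):
    # PS-side analogue: take the logarithm of the window power and
    # pack it for transmission over UDP to the LabVIEW host
    log_p = 10.0 * math.log10(average_power(samples))
    return struct.pack("!d", log_p)  # network byte order, 8-byte float

# A constant-amplitude window has power amplitude**2 = 4.0
payload = package_window([2.0] * 1000)
(log_p,) = struct.unpack("!d", payload)
```

In the described system one such payload would be produced every 10 ms and accumulated into UDP datagrams for the host.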


2021, Vol 2074 (1), pp. 012009
Author(s): Yanjing Cai

Abstract: Packet matching makes differentiated service available for packets entering the network, and network security and differentiated services make packet matching an inevitable choice for routers. Recursive flow classification (RFC) is a high-performance packet matching algorithm; however, as the dimension and scale of the rule base grow, its memory consumption grows rapidly. This paper lowers memory consumption by improving RFC: the rule base is divided into several subsets, with each rule stored in exactly one subset. In addition, a variety of methods are used to streamline the RFC data structure, further improving the algorithm's speed and memory performance. Experimental results show that the improved algorithm greatly reduces the overall memory consumption of RFC while substantially improving packet matching performance.
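The subset-division idea can be illustrated with a minimal Python sketch. The rule fields (protocol plus a destination-port range) and the choice of protocol as the partitioning key are simplifying assumptions for illustration, not the paper's actual scheme.

```python
from collections import defaultdict

# Hypothetical simplified rules: (rule_id, protocol, dst_port_range)
RULES = [
    (1, "tcp", (80, 80)),
    (2, "tcp", (443, 443)),
    (3, "udp", (53, 53)),
    (4, "udp", (1000, 2000)),
]

def build_subsets(rules):
    # Partition the rule base so each packet is matched against a
    # smaller subset, mirroring the paper's subset-division idea
    subsets = defaultdict(list)
    for rule in rules:
        subsets[rule[1]].append(rule)
    return subsets

def match(subsets, protocol, dst_port):
    # Only the subset for this packet's protocol is searched
    for rule_id, _, (lo, hi) in subsets.get(protocol, []):
        if lo <= dst_port <= hi:
            return rule_id
    return None

SUBSETS = build_subsets(RULES)
```

Each packet now touches only the data structures of its own subset, which is what allows the per-subset tables to stay small as the rule base grows.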


2021
Author(s): Mark Khait, Denis Voskov

Abstract: Alternatives to CPU computing architectures, such as GPUs, continue to evolve, increasing the gap in peak memory bandwidth achievable on a conventional workstation or laptop. Such architectures are attractive for reservoir simulation, whose performance is generally bounded by system memory bandwidth. However, to harvest the benefits of a new architecture, the source code inevitably has to be rewritten, sometimes almost completely. One of the biggest challenges is refactoring the Jacobian assembly, which typically involves large volumes of code and complex data processing. We demonstrate an effective and general way to simplify the linearization stage by extracting complex physics-related computations from the main simulation loop and leaving only an algebraic multilinear interpolation kernel in their place. In this work, we provide a detailed description of the simulation performance benefits of executing the entire nonlinear loop on the GPU platform. We evaluate the computational performance of the Delft Advanced Research Terra Simulator (DARTS) for various subsurface applications of practical interest on both CPU and GPU platforms, comparing particular workflow phases, including Jacobian assembly and linear system solution, with both stages of the Constrained Pressure Residual preconditioner.
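The interpolation kernel that replaces the physics-related computations can be illustrated in one dimension. This is a generic sketch of operator-based linearization under our own assumptions (invented table values, 1-D instead of multilinear), not DARTS code.

```python
import bisect

def interp_with_derivative(xs, ys, x):
    # Piecewise-linear interpolation of a tabulated physics operator,
    # returning both the value (used in the residual) and the slope
    # (used in the Jacobian). The expensive physics is evaluated only
    # when the table ys is built, never inside the simulation loop.
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    slope = (ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])
    return ys[i] + slope * (x - xs[i]), slope

# Invented operator table; query the value and derivative at x = 1.5
value, slope = interp_with_derivative([0.0, 1.0, 2.0], [0.0, 2.0, 6.0], 1.5)
```

Because the kernel is purely algebraic, the same code path serves any physics represented by a table, which is what makes porting the nonlinear loop to a GPU tractable.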


Author(s): David Broneske, Anna Drewes, Bala Gurumurthy, Imad Hajjar, Thilo Pionteck, ...

Abstract: Classical database systems now face the challenge of processing high-volume data feeds at unprecedented rates as efficiently as possible while also minimizing power consumption. Since CPU-only machines are hitting their limits, database system designers are investigating co-processors such as GPUs and FPGAs for their distinct capabilities. As a result, database systems over heterogeneous processing architectures are on the rise. In order to better understand their potentials and limitations, in-depth performance analyses are vital. This paper provides interesting performance data by benchmarking a portable operator set for column-based systems on CPU, GPU, and FPGA, all available processing devices within the same system. We consider TPC-H query Q6 and additionally a hash join to profile execution across the systems. We show that system memory access and/or buffer management remains the main bottleneck for device integration, and that architecture-specific execution engines and operators offer significantly higher performance.
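TPC-H Q6 itself is just a selection plus an aggregation over the lineitem table, which is what makes it a good portability probe. A column-at-a-time Python sketch (with an invented two-row table; the standard Q6 predicates on shipdate, discount, and quantity) looks like this:

```python
def q6(lineitem, date_lo, date_hi, disc_lo, disc_hi, qty_max):
    # Selection + aggregation of TPC-H Q6 over a column-store-like
    # representation: one parallel list per column
    revenue = 0.0
    for shipdate, discount, quantity, extprice in zip(
        lineitem["l_shipdate"], lineitem["l_discount"],
        lineitem["l_quantity"], lineitem["l_extendedprice"]
    ):
        if (date_lo <= shipdate < date_hi
                and disc_lo <= discount <= disc_hi
                and quantity < qty_max):
            revenue += extprice * discount
    return revenue

# Invented two-row table; only the first row passes all predicates
lineitem = {
    "l_shipdate": ["1994-03-01", "1995-06-10"],
    "l_discount": [0.06, 0.06],
    "l_quantity": [10.0, 10.0],
    "l_extendedprice": [100.0, 100.0],
}
revenue = q6(lineitem, "1994-01-01", "1995-01-01", 0.05, 0.07, 24.0)
```

On each device the same operator set evaluates these predicates and the sum; the interesting differences lie in how the columns reach the device, which is exactly the memory-access bottleneck the paper identifies.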


PLoS ONE, 2021, Vol 16 (7), pp. e0254313
Author(s): Ramalingam Shanmugam, Gerald Ledlow, Karan P. Singh

We present a restricted-infection-rate, inverse binomial-based approach to better predict COVID-19 cases after a family gathering. The traditional inverse binomial (IB) model is inappropriate for the reality of COVID-19, because the collected data contradict the model's requirement that the variance be larger than the expected value. Our refined version of the IB model is more appropriate, as it can accommodate all potential data scenarios, in which the variance is smaller than, equal to, or larger than the mean, unlike the usual IB model, which accommodates only the scenario in which the variance exceeds the mean. We apply the restricted-infectivity-rate approach and methodology to COVID-19 data, which exhibit two clusters of infectivity. Cluster 1 has a smaller number of primary cases and exhibits larger variance than the expected cases, with a negative correlation of 28%, implying that the number of secondary cases decreases when the number of primary cases increases and vice versa. The traditional IB model is appropriate for Cluster 1. The probability of contracting COVID-19 is estimated to be 0.13 among the primary cases but 0.75 among the secondary cases in Cluster 1, a wide gap. Cluster 2, with a larger number of primary cases, exhibits smaller variance than the expected cases, with a correlation of 79%, implying that the numbers of primary and secondary cases increase or decrease together. Cluster 2 disqualifies the traditional IB model and requires the refined version. The probability of contracting COVID-19 is estimated to be 0.74 among the primary cases but 0.72 among the secondary cases in Cluster 2, a narrow gap. The advantages of the proposed approach include the model's ability to estimate the community's health-system memory, as future policies might reduce COVID-19's spread. In our approach, the current hazard level of being infected with COVID-19 and the odds of not contracting COVID-19 among the primary group in comparison to the secondary group are estimable and interpretable.
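The model-selection criterion the abstract turns on, whether the sample variance exceeds the mean, can be checked directly; the function name and the decision labels below are our own illustrative choices.

```python
def dispersion_check(counts):
    # Compare the sample variance with the mean: the classical inverse
    # binomial model requires variance > mean (over-dispersion); the
    # refined, restricted-rate version covers variance <= mean as well.
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    return "classical IB" if var > mean else "refined IB"
```

For instance, a strongly over-dispersed sample such as [0, 1, 10] selects the classical model, while a tightly clustered sample such as [3, 4, 3, 4] requires the refined version.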


2021, Vol 11 (2), pp. 24
Author(s): Mirco De Marchi, Francesco Lumpp, Enrico Martini, Michele Boldo, Stefano Aldegheri, ...

Many modern programmable embedded devices contain CPUs and a GPU that share the same system memory on a single die. Such a unified memory architecture (UMA) allows programmers to implement different communication models between the CPU and the integrated GPU (iGPU). While the simpler model guarantees implicit synchronization at the cost of performance, the more advanced model uses the zero-copy paradigm to eliminate explicit data copying between the CPU and iGPU, with the benefit of significantly improved performance and energy savings. On the other hand, the robot operating system (ROS) has become a de facto reference standard for developing robotic applications. It allows for application reuse and the easy integration of software blocks in complex cyber-physical systems. Although ROS compliance is strongly required for software portability and reuse, it can lead to performance loss and forfeit the benefits of zero-copy communication. In this article we present efficient techniques for implementing CPU-iGPU communication while guaranteeing compliance with the ROS standard. We show how the key features of each communication model are maintained and analyze the corresponding overhead introduced by ROS compliance.
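The semantic difference between the two communication models can be shown with a toy Python analogy. The class names and the list-valued "buffer" are ours; on a real UMA device the distinction concerns shared physical memory and who is responsible for synchronization, not Python objects.

```python
class CopyChannel:
    """Copying model: publishing snapshots the buffer, so later
    producer writes are not visible to the consumer (implicit safety,
    at the cost of the copy)."""
    def publish(self, buf):
        self.data = list(buf)  # private copy
    def read(self):
        return self.data

class ZeroCopyChannel:
    """Zero-copy model: producer and consumer share one buffer, as the
    CPU and iGPU share system memory under UMA. No copy is made, but
    synchronization becomes the programmer's responsibility."""
    def publish(self, buf):
        self.data = buf  # shared reference, no copy
    def read(self):
        return self.data

buf = [1, 2, 3]
copy_ch, zc_ch = CopyChannel(), ZeroCopyChannel()
copy_ch.publish(buf)
zc_ch.publish(buf)
buf[0] = 99  # a later write by the producer
```

After the producer's late write, the copying channel still sees the original value while the zero-copy channel sees the new one, which is precisely why the zero-copy model is faster yet needs explicit synchronization.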


2021, Vol 118 (16), pp. e2015188118
Author(s): Mari Kawakatsu, Philip S. Chodrow, Nicole Eikmeier, Daniel B. Larremore

Many social and biological systems are characterized by enduring hierarchies, including those organized around prestige in academia, dominance in animal groups, and desirability in online dating. Despite their ubiquity, the general mechanisms that explain the creation and endurance of such hierarchies are not well understood. We introduce a generative model for the dynamics of hierarchies using time-varying networks, in which new links are formed based on the preferences of nodes in the current network and old links are forgotten over time. The model produces a range of structures, from egalitarianism to bistable hierarchies, and we derive critical points that separate these regimes in the limit of long system memory. Importantly, our model supports statistical inference, allowing for a principled comparison of generative mechanisms using data. We apply the model to study hierarchical structures in empirical data on hiring patterns among mathematicians, dominance relations among parakeets, and friendships among members of a fraternity, observing several persistent patterns as well as interpretable differences in the generative mechanisms favored by each. Our work contributes to the growing literature on statistically grounded models of time-varying networks.
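The model's two ingredients, preference-based link formation and forgetting of old links, can be sketched as a minimal score dynamic. The softmax-style choice rule, the decay factor, and all parameter values below are illustrative assumptions, not the paper's exact specification.

```python
import math
import random

def step(scores, beta, decay, rng):
    # One update of a simplified rank-based model: all scores decay
    # (old links are forgotten), then one node receives a new
    # endorsement, drawn with probability proportional to
    # exp(beta * score), so high-score nodes attract more links.
    scores = [s * decay for s in scores]
    probs = [math.exp(beta * s) for s in scores]
    r = rng.random() * sum(probs)
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            scores[i] += 1.0  # the chosen node gains an endorsement
            break
    return scores

rng = random.Random(0)
scores = [0.0] * 5
for _ in range(200):
    scores = step(scores, beta=2.0, decay=0.9, rng=rng)
```

With a large beta the dynamic concentrates endorsements on a few nodes (a hierarchy); with beta near zero the choice is nearly uniform (egalitarianism), and the decay factor plays the role of system memory.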


Author(s): Dmitry Vladimirovich Rakhinsky, Vladimir Viktorovich Lunev, Tatyana Anatolevna Luneva, Evgenii Stepanovich Shcheblyakov

The object of this research is the process of self-organization of students in higher education. The subject is the principles of planning the educational process in higher education under conditions of student self-organization, on the basis of the synergetic approach. The goal is the theoretical substantiation of a model of the educational process in higher education in accordance with pedagogical synergetics, which also serves as the research methodology. The synergetic approach makes it possible to integrate the experience accumulated in pedagogical science and to create a model of the educational process for the information society and a self-organizing learning environment. The authors examine two approaches to student self-organization: personal and collective. Principles for planning the educational process under conditions of student self-study and a rich information environment, based on the synergetic approach, are proposed. The conclusion is drawn that synergetics can serve as a methodological framework for studying phenomena of self-organization in students' learning in higher education. Two forms of self-organization are distinguished: coherent (from homogeneous elements) and continual (from heterogeneous elements). It is demonstrated that progressive self-organization in pedagogical systems can only be of the continual type. The article offers the following principles for planning the educational process by the type of continual self-organization: the principle of diversity at entry to the system, the principle of continuous interaction and openness of the system, the principle of nonlinearity of development, and the principle of system memory. The novelty of this work consists in formulating the principles of pedagogical synergetics at the methodological level for planning the educational process of students in higher education. The authors' particular contribution lies in substantiating the role of diversity and memory in the context of self-organization in open pedagogical systems.


2021, Vol 20, pp. 160940692110384
Author(s): Amaris Dalton, Karin Wolff, Bernard Bekker

Collaborative research has become increasingly prominent since the mid-20th century. This article aims to offer a fundamental ontology of a multidisciplinary research system. As a point of departure, we consider disciplinarity as a restricted language code in Bernstein's sense. The impetus for collaboration is found in a research problem's transcendence of disciplinary bounds. This article makes several propositions that diverge from the consensus position regarding the formation and dynamics of a multidisciplinary system, most notably that such a system exhibits the constituent elements of what could be regarded as a complex system: an ensemble of elements, interactions between these elements, and local disorder followed by the emergence of robust order and system memory. We propose that the internal communications and subsequent self-organization of such a system may be conceptualized as orientation signals, or 'stigmergy', analogous to those observed in swarms. System robustness, we argue, is a function of the individual researcher's local autonomy and is, paradoxically, augmented by the weakness of communications across disciplinary bounds, along with the lack of central organization and the emphasis on research novelty. System memory, we argue, manifests itself in the ability of a researcher to change her/his route of inquiry based on environmental feedback, whereby new information becomes incorporated into the adjusted research methodology. We propose that an emergent intelligence at the level of the system expresses itself in the unconcealment of the 'form' of the metaproblem. The theoretical model is empirically illustrated using, as an example, the contemporary field of renewable energy research, an area primed for collaborative research. It is anticipated that an improved understanding of multidisciplinary research systems provides insights into strengths particular to less integrated and self-organized forms of collaborative research, along with a framework with which to improve the design and fostering of such systems.


2021, Vol 251, pp. 04006
Author(s): Alexander Paramonov

The Front-End Link eXchange (FELIX) system is an interface between the trigger and detector electronics and commodity switched networks for the ATLAS experiment at CERN. In preparation for LHC Run 3, starting in 2022, the system is being installed to read out the new electromagnetic calorimeter, calorimeter trigger, and muon components being installed as part of the ongoing ATLAS upgrade programme. The detector and trigger electronic systems are largely custom and fully synchronous with respect to the 40.08 MHz clock of the Large Hadron Collider (LHC). The FELIX system uses FPGAs on server-hosted PCIe boards to pass data between the custom data links connected to the detector and trigger electronics and host system memory over a PCIe interface, and then routes the data to network clients, such as the Software Readout Drivers (SW RODs), via a dedicated software platform running on these machines. The SW RODs build event fragments, buffer data, perform detector-specific processing, and provide data to the ATLAS High Level Trigger. The FELIX approach takes advantage of modern FPGAs and commodity computing to reduce system complexity and the effort needed to support data acquisition systems in comparison to previous designs. Future upgrades of the experiment will extend FELIX to read out all other detector components.

