LargeGraph

2021 ◽  
Vol 18 (4) ◽  
pp. 1-24
Author(s):  
Yu Zhang ◽  
Da Peng ◽  
Xiaofei Liao ◽  
Hai Jin ◽  
Haikun Liu ◽  
...  

Many out-of-GPU-memory systems have recently been designed to support iterative processing of large-scale graphs. However, these systems still suffer from long convergence times because of inefficient propagation of active vertices’ new states along graph paths. To support out-of-GPU-memory graph processing efficiently, this work designs a system called LargeGraph. Unlike existing out-of-GPU-memory systems, LargeGraph proposes a dependency-aware data-driven execution approach, which can significantly accelerate the propagation of active vertices’ states along graph paths with low data-access cost and high parallelism. Specifically, according to the dependencies between the vertices, it only loads and processes the graph data associated with dependency chains originating from active vertices, reducing access cost. Because of the power-law property, most active vertices repeatedly use a small, evolving set of paths to propagate their new states; this small set of paths is dynamically identified, maintained, and efficiently handled on the GPU to accelerate most propagations for faster convergence, whereas the remaining graph data are handled on the CPU. For out-of-GPU-memory graph processing, LargeGraph outperforms four cutting-edge systems: Totem (5.19–11.62×), Graphie (3.02–9.41×), Garaph (2.75–8.36×), and Subway (2.45–4.15×).
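The dependency-aware, data-driven idea — process only the dependency chains that originate from currently active vertices, rather than scanning the whole graph each iteration — can be sketched as a frontier-driven traversal. The graph, the shortest-path update rule, and all names below are illustrative assumptions, not LargeGraph's actual implementation:

```python
from collections import deque

def propagate(adj, dist, active):
    """Relax edges only along chains originating from active vertices
    (illustrative sketch of dependency-aware, data-driven execution)."""
    frontier = deque(active)
    while frontier:
        u = frontier.popleft()
        for v, w in adj.get(u, []):
            if dist[u] + w < dist[v]:   # v's state changed: v becomes active
                dist[v] = dist[u] + w
                frontier.append(v)
    return dist

INF = float("inf")
# Toy weighted graph: vertex -> list of (neighbor, edge weight)
adj = {0: [(1, 2), (2, 5)], 1: [(2, 1)], 2: []}
dist = propagate(adj, [0, INF, INF], active=[0])
print(dist)  # [0, 2, 3]
```

Vertices whose state never changes are never touched, which is the source of the access-cost savings the abstract describes; LargeGraph additionally partitions the hot paths onto the GPU and the rest onto the CPU.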

2020 ◽  
Vol 10 (2) ◽  
pp. 103-106
Author(s):  
ASTEMIR ZHURTOV ◽  

Cruel and inhumane acts that harm human life and health, as well as humiliate human dignity, are prohibited in most countries of the world, and Russia is no exception in this respect. The article presents an analysis of the institution of responsibility for torture in the Russian Federation. The author comes to the conclusion that the current criminal law of Russia regulates liability for torture superficially and in a fragmentary manner, and therefore formulates proposals to define such an act as an independent crime. In the context of modern globalization, the world community pays special attention to the protection of human rights, for which large-scale international standards were created long ago. The Universal Declaration of Human Rights and other international acts enshrine prohibitions of cruel and inhumane acts that harm human life and health, as well as degrade human dignity. Drawing on historical experience, these standards focus on the prohibition of any kind of torture, regardless of the purpose of its use.


2021 ◽  
Vol 56 (1) ◽  
pp. 112-130 ◽  
Author(s):  
Haifeng Huang

For a long time after China’s opening to the outside world in the late 1970s, admiration for foreign socioeconomic prosperity and quality of life characterized much of Chinese society, contributing to dissatisfaction with the country’s development and government and a large-scale exodus of students and emigrants to foreign countries. More recently, however, overestimating China’s standing and popularity in the world has become a more conspicuous feature of Chinese public opinion and the social backdrop of the country’s overreach in global affairs in the last few years. This essay discusses the effects of these misperceptions about the world, their potential sources, and the outcomes of correcting them. It concludes that while the world should get China right and not misinterpret China’s intentions and actions, China should also get the world right and have a more balanced understanding of its relationship with the world.


2021 ◽  
pp. 5-20
Author(s):  
M. V. Ershov

The global economy continues to grow, albeit mainly due to large-scale support measures from governments and regulators. Moreover, regulators themselves are not sure about the prospects for such development, since the economies do not demonstrate the potential for independent growth. As a result, in order to stimulate it, regulators are forced to expand the range of their tools, mechanisms, and approaches; otherwise the risks to the stability of the global financial and economic system increase. All this is happening against a background of negative interest rates, which have become virtually ubiquitous and have persisted for a long time. Stock markets keep setting new growth records, and their gap from the real economy is widening. A number of sectors are beginning to dominate, forming distortions and bubbles in the markets. In such conditions, the importance of digital money, ecosystems, and similar formats increases. Moreover, the faster and more efficiently regulators can integrate into these formats, the more successful business, the population, and the economy as a whole will be.


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-16 ◽  
Author(s):  
Yiwen Zhang ◽  
Yuanyuan Zhou ◽  
Xing Guo ◽  
Jintao Wu ◽  
Qiang He ◽  
...  

The K-means algorithm is one of the ten classic algorithms in the area of data mining and has long been studied by researchers in numerous fields. However, the value of the cluster number k in the K-means algorithm is not always easy to determine, and the selection of the initial centers is vulnerable to outliers. This paper proposes an improved K-means clustering algorithm called the covering K-means algorithm (C-K-means). The C-K-means algorithm can not only acquire efficient and accurate clustering results but also self-adaptively provide a reasonable number of clusters based on the data features. It includes two phases: the initialization of the covering algorithm (CA) and the Lloyd iteration of K-means. The first phase executes the CA, which self-organizes and recognizes the number of clusters k based on the similarities in the data; it requires neither the number of clusters to be prespecified nor the initial centers to be manually selected. It therefore has a “blind” feature: k is not preselected. The second phase performs the Lloyd iteration based on the results of the first phase. The C-K-means algorithm thus combines the advantages of CA and K-means. Experiments are carried out on the Spark platform, and the results verify the good scalability of the C-K-means algorithm, which can effectively solve the problem of large-scale data clustering. Extensive experiments on real data sets show that the accuracy and efficiency of the C-K-means algorithm outperform those of existing algorithms under both sequential and parallel conditions.
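The two-phase structure — a covering pass that discovers k and the initial centers, followed by standard Lloyd iteration — can be sketched as below. The covering heuristic here (each uncovered point seeds a sphere of fixed radius) is a simplified stand-in for the paper's CA, not its actual algorithm; the radius parameter is an assumption of this sketch:

```python
import numpy as np

def covering_init(X, radius):
    """Phase 1 (simplified stand-in for CA): every point not yet covered
    seeds a new cover; the number of covers becomes k."""
    centers, covered = [], np.zeros(len(X), dtype=bool)
    for i in range(len(X)):
        if not covered[i]:
            centers.append(X[i])
            covered |= np.linalg.norm(X - X[i], axis=1) <= radius
    return np.array(centers)

def lloyd(X, centers, iters=20):
    """Phase 2: standard Lloyd iteration starting from the covering centers."""
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(len(centers))])
    return labels, centers

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centers = covering_init(X, radius=1.0)   # discovers k = 2 without it being prespecified
labels, centers = lloyd(X, centers)
print(len(centers))  # 2
```

The key property this preserves is the “blind” feature: k falls out of the data-driven covering pass rather than being a user-supplied parameter.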


2022 ◽  
Vol 15 (2) ◽  
pp. 1-33
Author(s):  
Mikhail Asiatici ◽  
Paolo Ienne

Applications such as large-scale sparse linear algebra and graph analytics are challenging to accelerate on FPGAs due to short, irregular memory accesses, which result in low cache hit rates. Nonblocking caches reduce the bandwidth required by misses by requesting each cache line only once, even when there are multiple misses corresponding to it. However, such a reuse mechanism is traditionally implemented using an associative lookup, which limits the number of misses considered for reuse to a few tens at most. In this article, we present an efficient pipeline that can process and store thousands of outstanding misses in cuckoo hash tables in on-chip SRAM with minimal stalls. This brings the same bandwidth advantage as a larger cache for a fraction of the area budget, because outstanding misses do not need a data array; this can significantly speed up irregular, memory-bound, latency-insensitive applications. In addition, we extend nonblocking caches to generate variable-length bursts to memory, which increases the bandwidth delivered by DRAMs and their controllers. The resulting miss-optimized memory system provides up to 25% speedup with 24× area reduction on 15 large sparse matrix-vector multiplication benchmarks evaluated on an embedded and a datacenter FPGA system.
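The core mechanism — tracking thousands of outstanding misses by cache-line address in a cuckoo hash table, so each line is requested from memory only once and later misses to it merge in — can be sketched in software. The two hash functions, table sizes, and entry layout below are illustrative assumptions; the paper describes a hardware pipeline, not this code:

```python
class CuckooMSHR:
    """Toy cuckoo hash table mapping a missing cache-line address to the
    list of requests waiting on it (software sketch, not the RTL design)."""
    def __init__(self, size=8, max_kicks=16):
        self.t0 = [None] * size   # each slot holds (addr, waiters) or None
        self.t1 = [None] * size
        self.size, self.max_kicks = size, max_kicks

    def _h0(self, addr): return addr % self.size
    def _h1(self, addr): return (addr // self.size) % self.size

    def lookup(self, addr):
        for table, h in ((self.t0, self._h0), (self.t1, self._h1)):
            e = table[h(addr)]
            if e and e[0] == addr:
                return e
        return None

    def insert(self, addr, req):
        e = self.lookup(addr)
        if e:                              # secondary miss: merge the request,
            e[1].append(req)               # no second memory access is issued
            return False
        entry = (addr, [req])
        for _ in range(self.max_kicks):    # cuckoo displacement loop
            i = self._h0(entry[0])
            entry, self.t0[i] = self.t0[i], entry
            if entry is None:
                return True                # primary miss: issue one request
            i = self._h1(entry[0])
            entry, self.t1[i] = self.t1[i], entry
            if entry is None:
                return True
        raise RuntimeError("table full; a real design would stall or spill")

m = CuckooMSHR()
issued = m.insert(0x40, "reqA")   # primary miss -> one memory request
merged = m.insert(0x40, "reqB")   # secondary miss -> merged, no new request
print(issued, merged)  # True False
```

Because an entry stores only the address and waiter metadata (no data array), scaling to thousands of outstanding misses costs far less area than scaling the cache itself, which is the trade-off the abstract quantifies.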


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Matthew Joseph ◽  
Aaron Roth ◽  
Jonathan Ullman ◽  
Bo Waggoner

There are now several large-scale deployments of differential privacy used to collect statistical information about users. However, these deployments periodically recollect the data and recompute the statistics using algorithms designed for a single use. As a result, these systems do not provide meaningful privacy guarantees over long time scales. Moreover, existing techniques to mitigate this effect do not apply in the “local model” of differential privacy that these systems use. In this paper, we introduce a new technique for local differential privacy that makes it possible to maintain up-to-date statistics over time, with privacy guarantees that degrade only with the number of changes in the underlying distribution rather than the number of collection periods. We use our technique to track a changing statistic in the setting where users are partitioned into an unknown collection of groups, and at every time period each user draws a single bit from a common (but changing) group-specific distribution. We also provide an application to frequency and heavy-hitter estimation.
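In the “local model” mentioned above, each user randomizes their own data before it leaves their device. The canonical local mechanism for a single bit is randomized response, sketched below to make the setting concrete; the paper's contribution is a technique layered on mechanisms of this kind for evolving data, not this code itself:

```python
import math, random

def randomized_response(bit, epsilon):
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it; this satisfies eps-local differential privacy."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_true else 1 - bit

def estimate_mean(reports, epsilon):
    """Debias the noisy reports: E[report] = m*p + (1-m)*(1-p),
    so m = (raw + p - 1) / (2p - 1)."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    raw = sum(reports) / len(reports)
    return (raw + p - 1.0) / (2.0 * p - 1.0)

random.seed(0)
bits = [1] * 700 + [0] * 300                       # true mean = 0.7
reports = [randomized_response(b, 1.0) for b in bits]
estimate = estimate_mean(reports, 1.0)             # close to 0.7
```

Run naively once per collection period, the per-user privacy loss grows with the number of periods; the paper's technique makes the loss grow only with the number of distribution changes.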


2016 ◽  
Author(s):  
Timothy N. Rubin ◽  
Oluwasanmi Koyejo ◽  
Krzysztof J. Gorgolewski ◽  
Michael N. Jones ◽  
Russell A. Poldrack ◽  
...  

A central goal of cognitive neuroscience is to decode human brain activity, i.e., to infer mental processes from observed patterns of whole-brain activation. Previous decoding efforts have focused on classifying brain activity into a small set of discrete cognitive states. To attain maximal utility, a decoding framework must be open-ended, systematic, and context-sensitive, i.e., capable of interpreting numerous brain states, presented in arbitrary combinations, in light of prior information. Here we take steps towards this objective by introducing a Bayesian decoding framework based on a novel topic model, Generalized Correspondence Latent Dirichlet Allocation, that learns latent topics from a database of over 11,000 published fMRI studies. The model produces highly interpretable, spatially circumscribed topics that enable flexible decoding of whole-brain images. Importantly, the Bayesian nature of the model allows one to “seed” decoder priors with arbitrary images and text, enabling researchers, for the first time, to generate quantitative, context-sensitive interpretations of whole-brain activity patterns.

