Data Layout and Data Representation Optimizations to Reduce Data Movement (Keynote)

Author(s): Mary Hall

Author(s): James A. Anderson, Paul Allopenna, Gerald S. Guralnik, Daniel Ferrente, John A. Santini

The Ersatz Brain Project develops programming techniques and software applications for a brain-like computing system. Its brain-like hardware architecture design is based on a select set of ideas taken from the anatomy of the mammalian neo-cortex. In common with other such attempts, it is based on a massively parallel, two-dimensional array of CPUs and their associated memory. The design used in this project: 1) Uses an approximation to cortical computation called the network of networks, which holds that the basic computing unit in the cortex is not a single neuron but groups of neurons working together in attractor networks; 2) Assumes connections and data representations in cortex are sparse; 3) Makes extensive use of local lateral connections and topographic data representations; and 4) Scales in a natural way from small groups of neurons to entire cortical regions. The resulting system computes effectively using techniques such as local data movement, sparse data representation, sparse connectivity, temporal coincidence, and the formation of discrete “module assemblies.” The authors discuss recent neuroscience in relation to their physiological assumptions and a set of experiments displaying what appear to be “concept-like,” ensemble-based cells in human cortex.
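To give a rough impression of the network-of-networks idea, the following minimal Python sketch (not the project's software; the grid size, module size, Hopfield-style update rule, and lateral coupling constant are all illustrative assumptions) arranges small attractor modules on a 2D grid, gives each module a few sparse stored patterns, and lets neighbouring modules influence one another through local lateral connections.

```python
# Minimal sketch of a "network of networks": a 2D grid of small attractor
# modules, each a Hopfield-style network storing a few sparse patterns,
# receiving lateral input from its 4 nearest neighbours on the grid.
# All sizes and constants are illustrative assumptions, not the project's.
import numpy as np

GRID, UNITS, PATTERNS = 8, 16, 3          # 8x8 modules, 16 units each, 3 stored patterns
rng = np.random.default_rng(0)

def sparse_pattern(n, active=4):
    """Random +/-1 pattern with only `active` units set to +1 (sparse coding)."""
    p = -np.ones(n)
    p[rng.choice(n, active, replace=False)] = 1.0
    return p

# Each module stores its own patterns with the standard Hopfield outer-product rule.
patterns = np.array([[sparse_pattern(UNITS) for _ in range(PATTERNS)]
                     for _ in range(GRID * GRID)])
weights = np.einsum('mpi,mpj->mij', patterns, patterns) / UNITS
for w in weights:
    np.fill_diagonal(w, 0.0)

state = np.array([sparse_pattern(UNITS) for _ in range(GRID * GRID)])

def neighbours(idx):
    """Indices of the 4-connected lateral neighbours of module `idx` on the grid."""
    r, c = divmod(idx, GRID)
    return [nr * GRID + nc for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
            if 0 <= nr < GRID and 0 <= nc < GRID]

LATERAL = 0.2                              # assumed strength of local lateral coupling
for _ in range(10):                        # a few synchronous update sweeps
    lateral = np.array([np.mean(state[neighbours(i)], axis=0)
                        for i in range(GRID * GRID)])
    drive = np.einsum('mij,mj->mi', weights, state) + LATERAL * lateral
    state = np.where(drive >= 0, 1.0, -1.0)

# Modules whose state matches one of their stored patterns have settled into an
# attractor; clusters of such modules correspond to the "module assemblies" above.
matches = np.einsum('mpi,mi->mp', patterns, state) / UNITS
print((matches.max(axis=1) > 0.9).reshape(GRID, GRID).astype(int))
```

The only data movement in this sketch is between adjacent grid cells, mirroring the abstract's emphasis on local lateral connections and sparse representations rather than global communication.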


2021, Vol. 14 (11), pp. 2216-2229
Author(s): Subhadeep Sarkar, Dimitris Staratzis, Zichen Zhu, Manos Athanassoulis

Log-structured merge (LSM) trees offer efficient ingestion by appending incoming data, and thus, are widely used as the storage layer of production NoSQL data stores. To enable competitive read performance, LSM-trees periodically re-organize data to form a tree with levels of exponentially increasing capacity, through iterative compactions. Compactions fundamentally influence the performance of an LSM-engine in terms of write amplification, write throughput, point and range lookup performance, space amplification, and delete performance. Hence, choosing the appropriate compaction strategy is crucial and, at the same time, hard as the LSM-compaction design space is vast, largely unexplored, and has not been formally defined in the literature. As a result, most LSM-based engines use a fixed compaction strategy, typically hand-picked by an engineer, which decides how and when to compact data. In this paper, we present the design space of LSM-compactions, and evaluate state-of-the-art compaction strategies with respect to key performance metrics. Toward this goal, our first contribution is to introduce a set of four design primitives that can formally define any compaction strategy: (i) the compaction trigger, (ii) the data layout, (iii) the compaction granularity, and (iv) the data movement policy. Together, these primitives can synthesize both existing and completely new compaction strategies. Our second contribution is to experimentally analyze 10 compaction strategies. We present 12 observations and 7 high-level takeaway messages, which show how LSM systems can navigate the compaction design space.
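The four primitives lend themselves to a compact encoding. The following Python sketch (not the paper's evaluation framework; the enum values and the two example strategies are illustrative assumptions) models a compaction strategy as a combination of trigger, data layout, compaction granularity, and data movement policy, and instantiates rough analogues of classic leveling and tiering.

```python
# Sketch of the paper's four compaction design primitives as a data type.
# Enum members and the two example strategies are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class Trigger(Enum):
    LEVEL_SATURATION = "level reaches its capacity"
    SORTED_RUNS = "level accumulates T sorted runs"

class DataLayout(Enum):
    LEVELING = "one sorted run per level"
    TIERING = "up to T sorted runs per level"

class Granularity(Enum):
    WHOLE_LEVEL = "compact the entire level"
    SINGLE_FILE = "compact one file plus its overlapping files"

class DataMovement(Enum):
    ROUND_ROBIN = "pick files in round-robin order"
    CHOOSE_COLDEST = "pick the least-recently-updated file"
    FULL = "move all data of the level"

@dataclass(frozen=True)
class CompactionStrategy:
    trigger: Trigger
    layout: DataLayout
    granularity: Granularity
    movement: DataMovement

# Rough analogue of classic leveling (LevelDB-style partial compactions).
leveling = CompactionStrategy(Trigger.LEVEL_SATURATION, DataLayout.LEVELING,
                              Granularity.SINGLE_FILE, DataMovement.ROUND_ROBIN)

# Rough analogue of classic tiering: merge only when T runs pile up on a level.
tiering = CompactionStrategy(Trigger.SORTED_RUNS, DataLayout.TIERING,
                             Granularity.WHOLE_LEVEL, DataMovement.FULL)

print(leveling)
print(tiering)
```

In this framing, synthesizing a new compaction strategy amounts to choosing a different combination of the four primitives, which is how the paper's design space is spanned.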

