Compute-in-Memory Architecture for Data-Intensive Kernels

Author(s): Robert Karam, Somnath Paul, Swarup Bhunia

Author(s): Said Hamdioui, Lei Xie, Hoang Anh Du Nguyen, Mottaqiallah Taouil, Koen Bertels, ...

2020
Author(s): Dominique Lavenier, Remy Cimadomo, Romaric Jodin

Abstract: In this paper, we introduce a new combination of software and hardware PIM (Processing-in-Memory) architecture to accelerate the variant calling genomic process. PIM means bringing data-intensive computation directly to where the data resides: within the DRAM, enhanced with thousands of processing units. Energy consumption, in large part due to data movement, is significantly lowered at a marginal additional hardware cost. This design allows an unprecedented level of parallelism for processing billions of short reads. Experiments on real PIM devices developed by the UPMEM company show significant speed-ups compared to a pure software implementation. The PIM solution also compares favorably to FPGA- and GPU-based acceleration, delivering similar to twice the processing speed while being 5 to 8 times cheaper to deploy and consuming up to 6 times less power.
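
To make the offload pattern concrete, the following C sketch simulates the host-side flow the abstract describes: partition the short reads across processing units, run a small matching kernel per unit (in real hardware, thousands of DRAM-resident units would run in parallel), and merge the per-unit results. The unit count, function names, and the in-process simulation are illustrative assumptions, not the UPMEM API.

/* Minimal sketch of the host-side PIM offload pattern described above.
 * The processing units and their memory banks are simulated with plain
 * arrays; NR_UNITS and pim_unit_count_matches are hypothetical names
 * that only illustrate the partition/launch/merge flow. */
#include <stdio.h>
#include <string.h>

#define NR_UNITS 4            /* real systems have thousands of units */
#define READ_LEN 8
#define NR_READS 12

/* Kernel that would run inside one DRAM-resident processing unit:
 * count how many of the reads assigned to this unit occur in the
 * reference fragment held in the unit's local memory bank. */
static int pim_unit_count_matches(const char *ref,
                                  char reads[][READ_LEN + 1],
                                  int nr_reads)
{
    int hits = 0;
    for (int i = 0; i < nr_reads; i++)
        if (strstr(ref, reads[i]))
            hits++;
    return hits;
}

int main(void)
{
    const char *reference = "ACGTACGTTTGACCAGTACGGATCCAGTT";
    char reads[NR_READS][READ_LEN + 1];

    /* Fabricate a batch of short reads from the reference. */
    for (int i = 0; i < NR_READS; i++) {
        memcpy(reads[i], reference + (i * 2) % 14, READ_LEN);
        reads[i][READ_LEN] = '\0';
    }

    /* Host side: partition the reads across the units, "launch" each
     * unit (in hardware these all run in parallel), then merge. */
    int total = 0;
    int per_unit = NR_READS / NR_UNITS;
    for (int u = 0; u < NR_UNITS; u++)
        total += pim_unit_count_matches(reference,
                                        &reads[u * per_unit], per_unit);

    printf("matched %d of %d reads\n", total, NR_READS);
    return 0;
}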


2007, Vol. 2 (1)
Author(s): M. Hochedlinger, W. Sprung, H. Kainz, K. König

The simulation of combined sewer overflow volumes and loads is important for assessing the overflow and overflow load discharged to the receiving water, in order to predict the hydraulic and pollution impacts. Hydrodynamic models are very data-intensive and time-consuming for long-term quality modelling. Hence, for long-term modelling, hydrological models are used to predict the storm flow quickly. In most cases, however, a constant rain intensity is used as the load for the simulation, whereas in practice, even for small catchments, rain occurs in rain cells that are not constant over the whole catchment area. This paper presents the results of quality modelling that considers moving storms, accounting for the rain cell's velocity and direction of movement. Additionally, tipping bucket gauge failures and different correction methods are taken into account. The results demonstrate the importance of these precipitation considerations for the overflow load, showing differences of up to 28% between corrected and uncorrected data and between moving rain cells and constant rain intensities.
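
To illustrate the moving-storm idea, the following C sketch evaluates rainfall intensity at a sub-catchment centroid as a rain cell travels across it, driven by the cell's velocity and direction, in contrast to a constant catchment-wide intensity. The Gaussian cell profile and all parameter values are illustrative assumptions, not the paper's model.

/* Minimal sketch of rainfall from a moving rain cell, as opposed to a
 * constant catchment-wide intensity. The Gaussian profile and the
 * parameter values are illustrative assumptions only. */
#include <stdio.h>
#include <math.h>

/* Intensity (mm/h) at point (x, y) and time t (h) for a cell whose
 * centre starts at (x0, y0) and moves with velocity (vx, vy) in km/h. */
static double cell_intensity(double x, double y, double t,
                             double x0, double y0,
                             double vx, double vy,
                             double peak, double radius)
{
    double cx = x0 + vx * t;              /* cell centre at time t */
    double cy = y0 + vy * t;
    double d2 = (x - cx) * (x - cx) + (y - cy) * (y - cy);
    return peak * exp(-d2 / (2.0 * radius * radius));
}

int main(void)
{
    /* Sub-catchment centroid at (2 km, 1 km); the cell starts 3 km to
     * the west and crosses the catchment heading east at 10 km/h. */
    for (double t = 0.0; t <= 1.0; t += 0.25)
        printf("t = %.2f h  intensity = %5.2f mm/h\n", t,
               cell_intensity(2.0, 1.0, t,
                              -1.0, 1.0,   /* initial centre  */
                              10.0, 0.0,   /* velocity vector */
                              30.0, 1.5)); /* peak, radius    */
    return 0;
}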


Author(s): Simab Hasan Rizvi

In today's age of tera-scale computing, applications have become more data-intensive than ever. The growing data volumes of applications, which now tackle larger and larger problems, have fuelled the need for efficient management of this data. In this paper, a technique for managing large volumes of data, called Content Addressable Storage (CAS), is evaluated. The evaluation focuses on the benefits and drawbacks of using CAS: i) improved application performance via lockless, lightweight synchronization of access to shared storage data; ii) improved cache performance; iii) increased storage capacity; and iv) increased network bandwidth. The presented design of a CAS-based file store significantly improves storage performance while providing lightweight, lockless, user-defined consistency semantics. As a result, this file system shows a 28% increase in read bandwidth and a 13% increase in write bandwidth over a popular file system in common use. The paper also estimates the potential benefits of using CAS for virtual machines and describes a mobility application for active use and public deployment.
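
The core CAS mechanism the abstract evaluates can be sketched in a few lines of C: each block is addressed by a hash of its own contents, so identical blocks map to the same address and are stored once, and readers can locate data by key without coordinating with writers. FNV-1a stands in here for the cryptographic hash (e.g. SHA-256) a real CAS would use; all names are illustrative assumptions.

/* Minimal sketch of content-addressable storage: a block is stored and
 * retrieved by a hash of its contents, so duplicates are stored once.
 * FNV-1a stands in for a cryptographic hash; names are illustrative. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <stdlib.h>

#define NR_BUCKETS 1024

struct block {
    uint64_t key;          /* content hash = the block's address */
    char *data;
    struct block *next;
};

static struct block *store[NR_BUCKETS];

static uint64_t fnv1a(const char *s)
{
    uint64_t h = 1469598103934665603ULL;
    for (; *s; s++) { h ^= (unsigned char)*s; h *= 1099511628211ULL; }
    return h;
}

/* Insert content and return its address; a block with the same key is
 * stored only once (this is where CAS deduplication happens). */
static uint64_t cas_put(const char *data)
{
    uint64_t key = fnv1a(data);
    struct block **b = &store[key % NR_BUCKETS];
    for (; *b; b = &(*b)->next)
        if ((*b)->key == key)
            return key;                    /* already stored */
    *b = calloc(1, sizeof(**b));
    (*b)->key = key;
    size_t n = strlen(data) + 1;
    (*b)->data = malloc(n);
    memcpy((*b)->data, data, n);
    return key;
}

static const char *cas_get(uint64_t key)
{
    for (struct block *b = store[key % NR_BUCKETS]; b; b = b->next)
        if (b->key == key)
            return b->data;
    return NULL;
}

int main(void)
{
    uint64_t a = cas_put("hello world");
    uint64_t b = cas_put("hello world");   /* deduplicated: same key */
    printf("same address: %s\n", a == b ? "yes" : "no");
    printf("data: %s\n", cas_get(a));
    return 0;
}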

