A Class of Fast and Accurate Multi-layer Block Summation and Dot Product Algorithms

Author(s):  
Kang He ◽  
Roberto Barrio ◽  
Lin Chen ◽  
Hao Jiang ◽  
Jie Liu ◽  
...  
Author(s):  
J. J. Hren ◽  
W. D. Cooper ◽  
L. J. Sykes

Small dislocation loops observed by transmission electron microscopy exhibit a characteristic black-white strain contrast when imaged under dynamical diffraction conditions. In many cases, the topography and orientation of the image may be used to determine the crystallography of the loop. Two distinct but somewhat overlapping procedures have been developed for the contrast analysis and identification of small dislocation loops. One group of investigators has emphasized the use of the topography of the image as the principal tool for analysis. The major premise of this method is that the characteristic details of the image topography depend only on the magnitude of the dot product between the loop Burgers vector and the diffracting vector. This technique is commonly referred to as (g·b) analysis. A second group of investigators has emphasized the use of the orientation of the direction of black-white contrast as the primary means of analysis.
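The premise of (g·b) analysis described above can be illustrated numerically: for a chosen diffracting vector g, the dot product with each candidate Burgers vector b classifies the expected contrast. The reflection and Burgers vectors below are hypothetical examples, not values from the abstract; this is a sketch of the arithmetic only, not of the full contrast-simulation procedure.

```python
import numpy as np

# Hypothetical g.b screening for a small dislocation loop.
# g is the diffracting vector, b a candidate Burgers vector (Miller indices).
g = np.array([2.0, 0.0, 0.0])  # e.g. a 200 reflection (assumed for illustration)

candidates = {
    "a/2[110]": np.array([1.0, 1.0, 0.0]) / 2,
    "a/2[011]": np.array([0.0, 1.0, 1.0]) / 2,
    "a/3[111]": np.array([1.0, 1.0, 1.0]) / 3,
}

for label, b in candidates.items():
    gb = float(np.dot(g, b))
    # |g.b| governs the characteristic image topography;
    # g.b = 0 implies weak (residual) contrast for this reflection.
    print(f"g.b for b = {label}: {gb:+.3f}")
```

In practice one images the same loop under several reflections and compares the observed topography against the |g·b| values each candidate Burgers vector predicts; a candidate giving g·b = 0 for a reflection where strong contrast is seen can be ruled out.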


2012 ◽  
Vol 47 (3) ◽  
pp. 548-568 ◽  
Author(s):  
Ross J. Kang ◽  
Tobias Müller

2013 ◽  
Vol 39 (2) ◽  
pp. 410-419 ◽  
Author(s):  
C. Efstathiou ◽  
N. Moschopoulos ◽  
I. Voyiatzis ◽  
K. Pekmestzi

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Yarib Nevarez ◽  
David Rotermund ◽  
Klaus R. Pawelzik ◽  
Alberto Garcia-Ortiz

2020 ◽  
Vol 29 (01n04) ◽  
pp. 2040007
Author(s):  
Yang Zhao ◽  
Fengyu Qian ◽  
Faquir Jain ◽  
Lei Wang

In-memory computing is an emerging technique to meet the fast-growing demand for high-performance data processing. This technique provides fast processing and high throughput by operating on data stored in the memory array rather than incurring complicated operations and data movement between memory and storage. For data processing, the most important computation is the dot product, which is also the core computation for applications such as deep learning neural networks, machine learning, etc. As multiplication is the key function in the dot product, improving its performance is critical to faster in-memory processing. In this paper, we present a design with the ability to perform in-memory multi-bit multiplications. The proposed design is implemented using quantum-dot transistors, which enable multi-bit computations in the memory cell. Experimental results demonstrate that the proposed design provides reliable in-memory multi-bit multiplications with high density and high energy efficiency. Statistical analysis is performed using Monte Carlo simulations to investigate process variations and error effects.
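The multiply-accumulate structure the abstract describes can be sketched in software. The snippet below is an illustrative model only, assuming a shift-and-add partial-product scheme for the multi-bit multiply; it does not represent the quantum-dot transistor circuit itself, and the function names and bit width are invented for the example.

```python
def multibit_multiply(a, b, bits=4):
    """Multiply two unsigned multi-bit operands via shifted partial products,
    the way a bit-serial in-memory multiplier would accumulate them."""
    product = 0
    for i in range(bits):
        if (b >> i) & 1:          # i-th bit of b selects a partial product
            product += a << i     # shift-and-add accumulation
    return product

def dot_product(xs, ws, bits=4):
    """Dot product as the sum of per-element multi-bit multiplications."""
    return sum(multibit_multiply(x, w, bits) for x, w in zip(xs, ws))

print(dot_product([3, 5, 7], [2, 4, 6]))  # 3*2 + 5*4 + 7*6 = 68
```

In an in-memory design, the per-element products would be formed inside the memory cells and summed along the array, so the accumulation loop above stands in for analog or digital column summation.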
