NeuroSim Simulator for Compute-in-Memory Hardware Accelerator: Validation and Benchmark

2021 ◽  
Vol 4 ◽  
Author(s):  
Anni Lu ◽  
Xiaochen Peng ◽  
Wantong Li ◽  
Hongwu Jiang ◽  
Shimeng Yu

Compute-in-memory (CIM) is an attractive solution to process the extensive workloads of multiply-and-accumulate (MAC) operations in deep neural network (DNN) hardware accelerators. A simulator with options for various mainstream and emerging memory technologies, architectures, and networks can be a great convenience for fast early-stage design space exploration of CIM hardware accelerators. DNN+NeuroSim is an integrated benchmark framework supporting flexible and hierarchical CIM array design options from the device level, to the circuit level, and up to the algorithm level. In this study, we validate and calibrate the predictions of NeuroSim against post-layout simulations of a 40-nm RRAM-based CIM macro. First, the parameters of the memory device and CMOS transistors are extracted from the foundry’s process design kit (PDK) and employed in the NeuroSim settings; the peripheral modules and operating dataflow are also configured to match the actual chip implementation. Next, the area, critical path, and energy consumption values from the SPICE simulations at the module level are compared with those from NeuroSim. Adjustment factors are introduced to account for transistor sizing and wiring area in the layout, gate switching activity, post-layout performance drop, etc. We show that the prediction from NeuroSim is precise, with chip-level error under 1% after the calibration. Finally, the system-level performance benchmark is conducted with various device technologies and compared with the results before the validation. The general conclusions stay the same after the validation, but the performance degrades slightly due to the post-layout calibration.
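The calibration step described above can be sketched in a few lines: raw analytical estimates per module are scaled by empirical adjustment factors fitted against SPICE results. The function name, metrics, and all numeric values below are illustrative assumptions, not values from NeuroSim or the paper.

```python
# Hypothetical sketch of the calibration idea: module-level estimates
# (area, delay, energy) are multiplied by empirical adjustment factors
# so the simulator tracks post-layout SPICE results.

def calibrate(estimate, factors):
    """Apply per-metric adjustment factors to raw analytical estimates."""
    return {metric: value * factors.get(metric, 1.0)
            for metric, value in estimate.items()}

# Raw analytical estimate for one peripheral module (arbitrary units).
raw = {"area": 100.0, "delay": 2.0, "energy": 5.0}

# Illustrative factors accounting for, e.g., wiring area in the layout,
# gate switching activity, and post-layout performance drop.
factors = {"area": 1.25, "delay": 1.25, "energy": 0.9}

print(calibrate(raw, factors))
# {'area': 125.0, 'delay': 2.5, 'energy': 4.5}
```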

2015 ◽  
Vol 138 (1) ◽  
Author(s):  
Jesse Austin-Breneman ◽  
Bo Yang Yu ◽  
Maria C. Yang

During the early stage design of large-scale engineering systems, design teams are challenged to balance a complex set of considerations. The established structured approaches for optimizing complex system designs offer strategies for achieving optimal solutions, but in practice suboptimal system-level results are often reached due to factors such as satisficing, ill-defined problems, or other project constraints. Twelve subsystem and system-level practitioners at a large aerospace organization were interviewed to understand the ways in which they integrate subsystems in their own work. Responses showed subsystem team members often presented conservative, worst-case scenarios to other subsystems when negotiating a tradeoff as a way of hedging against their own future needs. This practice of biased information passing, referred to informally by the practitioners as adding “margins,” is modeled in this paper with a series of optimization simulations. Three “bias” conditions were tested: no bias, a constant bias, and a bias which decreases with time. Results from the simulations show that biased information passing negatively affects both the number of iterations needed and the Pareto optimality of system-level solutions. Results are also compared to the interview responses and highlight several themes with respect to complex system design practice.
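The three bias conditions can be illustrated with a toy negotiation model (an assumption-laden sketch, not the authors' simulation): each subsystem reports its true requirement inflated by a margin, and the system iterates until the reported requirements fit within the budget. All parameter values are invented for illustration.

```python
# Toy model of biased information passing: two subsystem teams negotiate a
# shared budget, each reporting its true requirement inflated by a "margin".
# The decreasing-bias condition shrinks the margin each round.

def negotiate(true_needs, budget, bias, decay=1.0, max_rounds=100):
    """Return the number of rounds until the reported needs fit the budget."""
    margin = bias
    for round_num in range(1, max_rounds + 1):
        reported = [need * (1.0 + margin) for need in true_needs]
        if sum(reported) <= budget:
            return round_num
        margin *= decay  # decay < 1 models a bias that decreases with time
    return max_rounds

true_needs = [40.0, 55.0]  # true subsystem requirements
budget = 100.0             # system-level budget

print(negotiate(true_needs, budget, bias=0.0))             # -> 1 (no bias)
print(negotiate(true_needs, budget, bias=0.2, decay=0.5))  # -> 3 (decreasing bias)
print(negotiate(true_needs, budget, bias=0.2, decay=1.0))  # -> 100 (constant bias never fits)
```

Even in this toy setting, the constant margin prevents convergence entirely, echoing the simulation result that biased information passing increases the iterations needed.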


Author(s):  
N. Ashwin Bharadwaj ◽  
James T. Allison ◽  
Randy H. Ewoldt

Rheological material properties are high-dimensional function-valued quantities, such as frequency-dependent viscoelastic moduli or non-Newtonian shear viscosity. Here we describe a process to model and optimize design targets for such rheological material functions. For linear viscoelastic systems, we demonstrate that one can avoid specific a priori assumptions of spring-dashpot topology by writing governing equations in terms of a time-dependent relaxation modulus function. Our approach embraces rheological design freedom, connecting system-level performance to optimal material functions that transcend specific material classes or structure. This technique is therefore material agnostic, applying to any material class including polymers, colloids, metals, composites, or any rheologically complex material. These early-stage design targets allow for broadly creative ideation of possible material solutions, which can then be used for either material-specific selection or later-stage design of novel materials.
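The material-agnostic formulation above rests on the Boltzmann superposition integral, which determines the stress from the relaxation modulus G(t) alone, with no spring-dashpot topology assumed. A minimal numerical sketch follows, with an illustrative single-mode exponential G(t) standing in for a design-target modulus; all values are assumptions for demonstration.

```python
# Linear viscoelastic stress from the relaxation modulus alone:
#     sigma(t) = integral_0^t G(t - s) * dgamma/ds(s) ds
# Any candidate design-target G(t) can be plugged in unchanged.

import math

def stress(G, gamma_dot, t, n=1000):
    """Trapezoidal approximation of the superposition integral at time t."""
    ds = t / n
    total = 0.0
    for i in range(n + 1):
        s = i * ds
        weight = 0.5 if i in (0, n) else 1.0  # trapezoid endpoint weights
        total += weight * G(t - s) * gamma_dot(s) * ds
    return total

G = lambda t: 100.0 * math.exp(-t / 0.5)  # illustrative: 100 Pa, 0.5 s relaxation time
gamma_dot = lambda s: 1.0                 # start-up of steady shear at 1/s

# Stress approaches the steady value eta * rate = (100 * 0.5) * 1 = 50 Pa.
print(round(stress(G, gamma_dot, t=5.0), 2))  # -> 50.0
```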


2020 ◽  
Vol 142 (12) ◽  
Author(s):  
Priya P. Pillai ◽  
Edward Burnell ◽  
Xiqing Wang ◽  
Maria C. Yang

Engineers design for an inherently uncertain world. In the early stages of design processes, they commonly account for such uncertainty either by manually choosing a specific worst-case and multiplying uncertain parameters with safety factors or by using Monte Carlo simulations to estimate the probabilistic boundaries in which their design is feasible. The safety factors of this first practice are determined by industry and organizational standards, providing a limited account of uncertainty; the second practice is time intensive, requiring the development of separate testing infrastructure. In theory, robust optimization provides an alternative, allowing set-based conceptualizations of uncertainty to be represented during model development as optimizable design parameters. How these theoretical benefits translate to design practice has not previously been studied. In this work, we analyzed the present use of geometric programs as design models in the aerospace industry to determine the current state-of-the-art, then conducted a human-subjects experiment to investigate how various mathematical representations of uncertainty affect design space exploration. We found that robust optimization led to far more efficient explorations of possible designs with only small differences in an experimental participant’s understanding of their model. Specifically, the Pareto frontier of a typical participant using robust optimization left less performance “on the table” across various levels of risk than the very best frontiers of participants using industry-standard practices.
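The contrast between the two established practices can be made concrete on a one-variable sizing problem; all numbers and names below are invented for illustration, not taken from the study.

```python
# Choose a rod cross-section A so that stress P/A stays below an allowable
# value, with the load P uncertain in [P0 - dP, P0 + dP].

P0, dP = 10_000.0, 1_500.0  # nominal load and its uncertainty, N
sigma_allow = 250e6         # allowable stress, Pa

# Practice 1: safety factor on the nominal load (factor set by convention,
# regardless of how much uncertainty is actually present).
SF = 1.5
A_safety = SF * P0 / sigma_allow

# Alternative: set-based worst case over the stated uncertainty interval,
# the kind of constraint a robust optimizer handles directly.
A_robust = (P0 + dP) / sigma_allow

print(f"safety-factor area: {A_safety * 1e6:.1f} mm^2")  # 60.0 mm^2
print(f"robust area:        {A_robust * 1e6:.1f} mm^2")  # 46.0 mm^2
```

Because the robust bound tracks the stated uncertainty rather than a fixed convention, it leaves less performance "on the table" when the uncertainty is smaller than the safety factor implies, consistent with the experimental finding above.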


2021 ◽  
Vol 1 ◽  
pp. 1163-1172
Author(s):  
Rachel Meredith Moore ◽  
Anna-Maria Rivas McGowan ◽  
Nathaneal Jeyachandran ◽  
Kathleen H. Bond ◽  
Daniel Williams ◽  
...  

The earliest stage in the innovation lifecycle, problem formulation, is crucial for setting direction in an innovation effort. When faced with an interesting problem, engineers commonly assume the approximate solution area and focus on ideating innovative solutions. However, in this project, NASA and their contracted partner, Accenture, collaboratively conducted problem discovery to ensure that solutioning efforts were focused on the right problems, for the right users, and addressing the most critical needs: in this case, exploring weather tolerant operations (WTO) to further urban air mobility (UAM), known as UAM WTO. The project team leveraged generative, qualitative methods to understand the ecosystem and its players, and to identify where challenges in the industry are inhibiting development. The complexity of the problem area required that the team constantly observe and iterate on problem discovery, effectively “designing the design process.” This paper discusses the approach, methodologies, and selected results, including significant insights on the application of early-stage design methodologies to a complex, system-level problem.


Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 231
Author(s):  
Chester Sungchung Park ◽  
Sunwoo Kim ◽  
Jooho Wang ◽  
Sungkyung Park

A digital front-end decimation chain based on both a Farrow interpolator for fractional sample-rate conversion and a digital mixer is proposed in order to comply with the long-term evolution (LTE) standards in radio receivers with ten frequency modes. Design requirement specifications for adjacent channel selectivity, in-band blockers, and narrowband blockers are all satisfied, so the proposed digital front-end is 3GPP-compliant. Furthermore, the proposed digital front-end addresses carrier aggregation in the standards via appropriate frequency translations. The digital front-end has a cascaded integrator-comb (CIC) filter prior to the Farrow interpolator, and a per-carrier carrier-aggregation filter and channel-selection filter following the digital mixer. A Farrow interpolator with integrate-and-dump circuitry controlled by a condition signal is proposed, as is a digital mixer with periodic reset to prevent phase-error accumulation. From the standpoint of design methodology, three models are developed for the overall digital front-end: functional, cycle-accurate, and bit-accurate. Performance is verified by means of the cycle-accurate model; subsequently, by means of a special C++ class, the bit widths are minimized in a methodic manner to reduce area. For system-level performance verification, the orthogonal frequency-division multiplexing (OFDM) receiver is also modeled. The critical path delay of each building block is analyzed, and the spectral-domain view is obtained for each building block of the digital front-end circuitry. The proposed digital front-end circuitry is simulated, synthesized in a 180 nm CMOS application-specific integrated circuit (ASIC) technology, and implemented on a Xilinx XC6VLX550T field-programmable gate array (Xilinx, San Jose, CA, USA).
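The fractional sample-rate conversion at the heart of the chain can be sketched with a cubic Lagrange Farrow structure. This is a generic textbook form assumed for illustration; the paper's actual filter coefficients, bit widths, and control logic are not reproduced here.

```python
# Farrow structure: four fixed sub-filters produce polynomial coefficients
# c0..c3 from the input samples, and the fractional interval mu selects the
# output sample via a Horner evaluation in mu.

def farrow_cubic(p0, p1, p2, p3, mu):
    """Interpolate between p1 and p2 at fractional position mu in [0, 1)."""
    # Fixed sub-filter outputs (cubic Lagrange, nodes at -1, 0, 1, 2).
    c0 = p1
    c1 = -p0 / 3.0 - p1 / 2.0 + p2 - p3 / 6.0
    c2 = (p0 + p2) / 2.0 - p1
    c3 = (p3 - p0) / 6.0 + (p1 - p2) / 2.0
    # Horner evaluation in the fractional interval.
    return ((c3 * mu + c2) * mu + c1) * mu + c0

def resample(x, ratio):
    """Fractional sample-rate conversion of sequence x by the given ratio."""
    out, t = [], 1.0  # start where all four taps are valid
    while t < len(x) - 2:
        n, mu = int(t), t - int(t)
        out.append(farrow_cubic(x[n - 1], x[n], x[n + 1], x[n + 2], mu))
        t += ratio
    return out

# Sanity check: cubic Lagrange reproduces a ramp exactly.
ramp = [float(i) for i in range(10)]
print([round(v, 4) for v in resample(ramp, 0.75)])
# [1.0, 1.75, 2.5, 3.25, 4.0, 4.75, 5.5, 6.25, 7.0, 7.75]
```

In hardware, the four coefficient computations become fixed multiplierless sub-filters and only `mu` changes per output sample, which is what makes the structure attractive for the decimation chain described above.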

