Policy Routing Using Process-Level Identifiers

Author(s):  
Oliver Michel ◽  
Eric Keller

Author(s):  
Steven B. Herschbein ◽  
Hyoung H. Kang ◽  
Scott L. Jansen ◽  
Andrew S. Dalton

Test engineers and failure analysts familiar with random access memory arrays have probably encountered the frustration of dealing with address descrambling. The resulting nonsequential internal bit-cell counting scheme often means that the failing cell under investigation is nowhere near where it is expected to be. A logical-to-physical algorithm for decoding the standard library block might have been provided with the design, but is it still correct now that the array has been halved and inverted to fit the available space in a new processor chip? Off-line labs have traditionally been tasked with array layout verification. In the past, hard and soft failures could be induced on the frontside of finished product and then bitmapped to see whether the sites agreed. As density tightened, flip-chip FIB techniques for inducing a pattern of hard fails on packaged devices came into practice. While the backside FIB edit method is effective, it is complex and expensive. The installation of an in-line Dual Beam FIB created new opportunities to move FA tasks out of the lab and into the fab. Using a new edit procedure, selected wafers have an extensive pattern of defects 'written' directly into the memory array at an early process level. Bitmapping of the RAM blocks upon wafer completion then verifies the correlation between the physically damaged cells and the logical sites called out in the test results. This early-feedback in-line methodology has worked so well that it has almost entirely displaced the complex laboratory procedure of backside FIB memory-array descramble verification.
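The logical-to-physical decoding referred to here can be pictured with a toy descramble routine. The bit-reversal and row-inversion rules below are invented purely for illustration; real scramble schemes are design-specific and must come from the library documentation:

```python
def logical_to_physical(addr: int, row_bits: int = 4, col_bits: int = 4) -> tuple:
    """Map a logical address to a hypothetical (row, column) cell location."""
    row = (addr >> col_bits) & ((1 << row_bits) - 1)
    col = addr & ((1 << col_bits) - 1)
    # Example scramble (illustrative only): invert the row when the column
    # has odd bit parity, and reverse the column's bit order.
    if bin(col).count("1") % 2 == 1:
        row = ((1 << row_bits) - 1) - row
    col = int(f"{col:0{col_bits}b}"[::-1], 2)
    return row, col

# A failing logical address rarely lands where sequential counting suggests:
print(logical_to_physical(0x13))   # → (1, 12)
```

Even this simple mapping shows why a bitmap of induced defects is needed to confirm that the decode algorithm still matches the silicon after a layout change.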


2020 ◽  
Vol 45 (3) ◽  
pp. 1069-1103
Author(s):  
Anton Braverman

This paper studies the steady-state properties of the join-the-shortest-queue model in the Halfin–Whitt regime. We focus on the process tracking the number of idle servers and the number of servers with nonempty buffers. Recently, Eschenfeldt and Gamarnik proved that a scaled version of this process converges, over finite time intervals, to a two-dimensional diffusion limit as the number of servers goes to infinity. In this paper, we prove that the diffusion limit is exponentially ergodic and that the diffusion scaled sequence of the steady-state number of idle servers and nonempty buffers is tight. Combined with the process-level convergence proved by Eschenfeldt and Gamarnik, our results imply convergence of steady-state distributions. The methodology used is the generator expansion framework based on Stein’s method, also referred to as the drift-based fluid limit Lyapunov function approach in Stolyar. One technical contribution to the framework is to show how it can be used as a general tool to establish exponential ergodicity.
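The setup can be mimicked with a toy discrete-time (uniformized) simulation of join-the-shortest-queue under Halfin–Whitt scaling. The parameter values are illustrative, and this sketch makes no claim to reproduce the paper's diffusion analysis:

```python
import random

def simulate_jsq(n=100, beta=1.0, steps=50_000, seed=7):
    """Track idle servers and nonempty buffers under JSQ routing.

    Arrivals come at total rate n - beta*sqrt(n) (Halfin-Whitt scaling);
    each server works at rate 1. Uniformization turns the dynamics into a
    discrete-time chain where each step is an arrival or a potential departure.
    """
    rng = random.Random(seed)
    queues = [0] * n                      # jobs at each server (service + buffer)
    lam = n - beta * n ** 0.5
    p_arrival = lam / (lam + n)
    for _ in range(steps):
        if rng.random() < p_arrival:
            queues[queues.index(min(queues))] += 1   # join a shortest queue
        else:
            i = rng.randrange(n)                     # potential departure
            if queues[i] > 0:
                queues[i] -= 1
    idle = sum(q == 0 for q in queues)
    nonempty = sum(q > 1 for q in queues)            # buffer nonempty iff q >= 2
    return idle, nonempty
```

Under this scaling, both counts are expected to be of order sqrt(n), which is the scale at which the two-dimensional diffusion limit lives.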


2021 ◽  
pp. 109467052110124
Author(s):  
Sarah Köcher ◽  
Sören Köcher

In this article, the authors demonstrate a tendency among consumers to use the arithmetic mode as a heuristic basis when drawing inferences from graphical displays of online rating distributions in such a way that service evaluations inferred from rating distributions systematically vary by the location of the mode. The rationale underlying this phenomenon is that the mode (i.e., the most frequent rating which is represented by the tallest bar in a graphical display) attracts consumers’ attention because of its visual salience and is thus disproportionately weighted when they draw conclusions. Across a series of eight studies, the authors provide strong empirical evidence for the existence of the mode heuristic, shed light on this phenomenon at the process level, and demonstrate how consumers’ inferences based on the mode heuristic depend on the visual salience of the mode. Together, the findings of these studies contribute to a better understanding of how service customers process and interpret graphical illustrations of online rating distributions and provide companies with a new key figure that—aside from rating volume, average ratings, and rating dispersion—should be incorporated in the monitoring, analyzing, and evaluating of review data.
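The gap between the mode heuristic and a mean-based reading can be illustrated with two invented rating distributions; the counts below are hypothetical, not data from the studies:

```python
from statistics import mean

# Two hypothetical five-star rating distributions (rating -> review count).
dist_a = {1: 10, 2: 5, 3: 5, 4: 10, 5: 40}   # tallest bar at 5 stars
dist_b = {1: 40, 2: 10, 3: 5, 4: 5, 5: 10}   # tallest bar at 1 star

def mode_and_mean(dist):
    """Return the modal rating (the tallest bar) and the arithmetic mean."""
    ratings = [r for r, count in dist.items() for _ in range(count)]
    return max(dist, key=dist.get), round(mean(ratings), 2)

print(mode_and_mean(dist_a))   # → (5, 3.93)
print(mode_and_mean(dist_b))   # → (1, 2.07)
```

A consumer relying on the visually salient tallest bar reads `dist_a` as a 5-star service and `dist_b` as a 1-star one, which is why the authors argue the mode deserves monitoring alongside volume, average, and dispersion.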


1998 ◽  
Vol 120 (1) ◽  
pp. 129-140 ◽  
Author(s):  
P. Sheng ◽  
E. Hertwich

With the expansion of pollution-prevention initiatives in the government sector, the development of certification and eco-labeling mechanisms in foreign trade, and the emergence of “green” market drivers for consumer demand, industry is under increasing pressure to evaluate the “life-cycle” waste streams that emanate from its products and manufacturing processes. While much research has been devoted to “system-level” design-for-environment (i.e., design for disassembly, serviceability, and modularity), little attention has been given to the influence of planning and design decisions at the unit manufacturing process level, which have a significant impact on waste streams through material, catalyst, parameter, and feature selection decisions. One of the most pressing issues in environmentally conscious manufacturing is the ability to compare the environmental impacts of dissimilar waste streams so as to inform these decisions. This paper presents an overview of the hierarchical levels of comparative waste assessment, which link process-level emissions to immediate, site-wide, and ecosystem impacts. Significant issues to be addressed are: (1) the aggregation of data collection required for each level of decision-making, (2) the range of environmental effects to be analyzed at each level, (3) the uncertainty present at different levels of data aggregation, (4) the influence of site-specific (fate and transport) factors, and (5) the transformation of environmental information into metrics usable in the detailed design and planning of products and processes. Case studies in the fabrication of metal parts and printed circuit boards are presented.

