New techniques to speed up voltage collapse computations using tangent vectors

1997 ◽  
Vol 12 (3) ◽  
pp. 1380-1387 ◽  
Author(s):  
A.C.Z. de Souza ◽  
C.A. Canizares ◽  
V.H. Quintana


Author(s):  
Marek Chrobak ◽  
Mordecai Golin ◽  
Tak-Wah Lam ◽  
Dorian Nogneng

We consider scheduling problems for unit jobs with release times, where the number or size of the gaps in the schedule is taken into consideration, either in the objective function or as a constraint. Except for several papers on minimum-energy scheduling, there is no work in the scheduling literature that uses performance metrics depending on the gap structure of a schedule. One of our objectives is to initiate the study of such scheduling problems. We focus on the model with unit-length jobs. First we examine scheduling problems with deadlines, where we consider two variants of minimum-gap scheduling: maximizing throughput with a budget for the number of gaps, and minimizing the number of gaps with a throughput requirement. We then turn to other objective functions. For example, in some scenarios gaps in a schedule may actually be desirable, leading to the problem of maximizing the number of gaps. A related problem involves minimizing the maximum gap size. The second part of the paper examines the model without deadlines, where we focus on the tradeoff between the number of gaps and the total or maximum flow time. For all these problems we provide polynomial-time algorithms, with running times ranging from $$O(n\log n)$$ for some problems to $$O(n^7)$$ for others. The solutions involve a spectrum of algorithmic techniques, including different dynamic programming formulations, speed-up techniques based on searching Monge arrays, searching $$X+Y$$ matrices, and implicit binary search. Throughout the paper, we also draw a connection between gap scheduling problems and their continuous analogues, namely hitting set problems for intervals of real numbers. As it turns out, for some problems the continuous variants provide insights leading to efficient algorithms for the corresponding discrete versions, while for other problems completely new techniques are needed to solve the discrete version.
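As a rough illustration of the gap model only, and not of the paper's algorithms, the following Python sketch schedules unit-length jobs greedily by earliest deadline and then counts the gaps in the resulting schedule; the function names and the example instance are invented for this sketch.

import heapq

def edf_unit_schedule(jobs):
    """jobs: list of (release, deadline) pairs with unit processing times.
    Returns {time slot: job index} for the jobs that get scheduled."""
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][0])
    schedule, ready, k, t = {}, [], 0, 0
    while k < len(order) or ready:
        if not ready and k < len(order):
            t = max(t, jobs[order[k]][0])      # jump over idle time to the next release
        while k < len(order) and jobs[order[k]][0] <= t:
            heapq.heappush(ready, (jobs[order[k]][1], order[k]))
            k += 1
        while ready and ready[0][0] <= t:      # discard jobs that can no longer meet their deadline
            heapq.heappop(ready)
        if ready:
            _, i = heapq.heappop(ready)        # run the released job with the earliest deadline
            schedule[t] = i
        t += 1
    return schedule

def count_gaps(schedule):
    """Number of maximal idle intervals strictly between busy slots."""
    busy = sorted(schedule)
    return sum(1 for a, b in zip(busy, busy[1:]) if b - a > 1)

jobs = [(0, 3), (0, 2), (5, 7), (9, 10)]       # (release, deadline) pairs
s = edf_unit_schedule(jobs)
print(s, "gaps:", count_gaps(s))               # {0: 1, 1: 0, 5: 2, 9: 3} gaps: 2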


2021 ◽  
Vol 55 (1) ◽  
pp. 1-10
Author(s):  
Kasra Jamshidi ◽  
Keval Vora

Graph mining workloads aim to extract structural properties of a graph by exploring its subgraph structures. PEREGRINE is a general-purpose graph mining system that provides a generic runtime to efficiently explore subgraph structures of interest and perform various graph mining analyses. It takes a 'pattern-aware' approach by incorporating a pattern-based programming model along with efficient pattern matching strategies. The programming model enables easier expression of complex graph mining use cases and enables PEREGRINE to extract the semantics of patterns. By analyzing the patterns, PEREGRINE generates efficient exploration plans, which it uses to guide its subgraph exploration. In this paper, we present an in-depth view of the pattern-analysis techniques powering the matching engine of PEREGRINE. Beyond the theoretical foundations from prior research, we expose opportunities based on how the exploration plans are evaluated, and develop key techniques for computation reuse, enumeration depth reduction, and branch elimination. Our experiments show the importance of pattern-awareness for scalable and performant graph mining: the presented new techniques speed up performance by up to two orders of magnitude on top of the benefits achieved from the prior theoretical foundations that generate the initial exploration plans.
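A minimal sketch of the idea behind pattern-aware exploration, not PEREGRINE's engine or API (the graph, pattern, and function names below are invented): match a triangle pattern along a fixed vertex order with symmetry-breaking constraints, so each occurrence is enumerated exactly once rather than once per automorphism.

from collections import defaultdict

def build_adj(edges):
    """Undirected adjacency sets."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def count_triangles(edges):
    adj = build_adj(edges)
    count = 0
    for v0 in adj:                           # extend a partial match one vertex at a time
        for v1 in adj[v0]:
            if v1 <= v0:                     # symmetry breaking: require v0 < v1
                continue
            for v2 in adj[v0] & adj[v1]:     # candidate must connect to both matched vertices
                if v2 > v1:                  # symmetry breaking: require v1 < v2
                    count += 1
    return count

edges = [(1, 2), (2, 3), (1, 3), (3, 4), (2, 4)]
print(count_triangles(edges))                # two triangles: (1,2,3) and (2,3,4)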


2021 ◽  
Author(s):  
Jalal Mohammad Chikhe

Due to the reduction of transistor size, modern circuits are becoming more sensitive to soft errors. The development of new techniques and algorithms targeting soft error detection is important, as they allow designers to evaluate the weaknesses of a circuit at an early stage of the design. This project presents an optimized implementation of a soft error detection simulator targeting combinational circuits. The developed simulator uses advanced switch-level models that allow the injection of soft errors caused by single-event transient pulses with magnitudes lower than the logic threshold. The ISCAS'85 benchmark circuits are used for the simulations. The transients can be injected at the drain, the gate, or the inputs of a logic gate, which gives a clear indication of the importance of the transient injection location on the fault coverage. Furthermore, an algorithm is designed and implemented in this work to increase the performance of the simulator. This optimized version of the simulator achieved an average speed-up of 310 compared to the version without this algorithm.
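A hedged, gate-level sketch of transient injection; the thesis itself uses switch-level models and the ISCAS'85 circuits, whereas the tiny netlist, net names, and functions below are invented for illustration. A transient strong enough to flip the logic level is modelled by inverting the value on one net, and the primary output is compared against a fault-free run to decide whether the fault is detected or masked.

def simulate(inputs, flip_net=None):
    """inputs: dict of primary-input values (0/1); flip_net: name of the net
    whose value is inverted, modelling a level-flipping transient."""
    net = {}

    def put(name, value):
        net[name] = value ^ 1 if name == flip_net else value

    for name, value in inputs.items():   # transients can also hit primary inputs
        put(name, value)
    # tiny example netlist: y = (a AND b) OR (NOT c)
    put("n1", net["a"] & net["b"])       # output node of the AND gate
    put("n2", net["c"] ^ 1)              # output node of the NOT gate
    put("y", net["n1"] | net["n2"])      # OR gate drives the primary output
    return net["y"]

vector = {"a": 1, "b": 0, "c": 1}
golden = simulate(vector)                # fault-free reference value
for target in ["a", "b", "c", "n1", "n2", "y"]:
    observed = simulate(vector, flip_net=target)
    print(target, "detected at output" if observed != golden else "masked")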


1962 ◽  
Vol 11 (02) ◽  
pp. 137-143
Author(s):  
M. Schwarzschild

It is perhaps one of the most important characteristics of the past decade in astronomy that the evolution of some major classes of astronomical objects has become accessible to detailed research. The theory of the evolution of individual stars has developed into a substantial body of quantitative investigations. The evolution of galaxies, particularly of our own, has clearly become a subject for serious research. Even the history of the solar system, this close-by intriguing puzzle, may soon make the transition from being a subject of speculation to being a subject of detailed study, in view of the fast flow of new data obtained with new techniques, including spacecraft.


Author(s):  
M.A. Parker ◽  
K.E. Johnson ◽  
C. Hwang ◽  
A. Bermea

We have reported the dependence of the magnetic and recording properties of CoPtCr recording media on the thickness of the Cr underlayer. It was inferred from XRD data that grain-to-grain epitaxy of the Cr with the CoPtCr was responsible for the interaction observed between these layers. However, no cross-sectional TEM (XTEM) work was performed to confirm this inference. In this paper, we report the application of new techniques, for preparing XTEM specimens from actual magnetic recording disks and for layer-by-layer micro-diffraction with an electron probe elongated parallel to the surface of the deposited structure, which elucidate the effect of the crystallographic structure of the Cr on that of the CoPtCr. XTEM specimens were prepared from magnetic recording disks by modifying a technique used to prepare semiconductor specimens. After 3 mm disks were prepared per the standard XTEM procedure, they were lapped using a tripod polishing device. A grid with a single 1 mm × 2 mm hole was then glued with M-Bond 610 to the polished side of the disk.


Author(s):  
Brian Cross

A relatively new entry in the field of microscopy is the Scanning X-Ray Fluorescence Microscope (SXRFM). Using this type of instrument (e.g. the Kevex Omicron X-ray Microprobe), one can obtain multiple elemental x-ray images from the analysis of materials which show heterogeneity. The SXRFM obtains images by collimating an x-ray beam (e.g. 100 μm diameter) and then scanning the sample with a high-speed x-y stage. To speed up the image acquisition, data is acquired "on the fly" by slew-scanning the stage along the x-axis, like a TV or SEM scan. To reduce the overhead from "fly-back", the images can be acquired by bi-directional scanning of the x-axis, which leaves very little overhead for re-positioning the sample stage. The image acquisition rate is dominated by the x-ray acquisition rate; therefore, the total x-ray image acquisition rate of the SXRFM is very comparable to that of an SEM. Although the x-ray spatial resolution of the SXRFM is worse than that of an SEM (say 100 vs. 2 μm), there are several other advantages.
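A small sketch of the bi-directional (serpentine) scan order described above; this is not the instrument's control software, and the function names and the counts callback are assumptions. Even rows are traversed left-to-right and odd rows right-to-left, so the stage avoids a full-width fly-back between lines while counts are accumulated on the fly.

def serpentine_scan(n_rows, n_cols):
    """Yield (row, col) stage positions in bi-directional raster order."""
    for row in range(n_rows):
        cols = range(n_cols) if row % 2 == 0 else range(n_cols - 1, -1, -1)
        for col in cols:
            yield row, col

def acquire(n_rows, n_cols, dwell_counts):
    """Fill an image while the stage moves; dwell_counts(row, col) stands in
    for the x-ray counts recorded during one dwell period."""
    image = [[0] * n_cols for _ in range(n_rows)]
    for row, col in serpentine_scan(n_rows, n_cols):
        image[row][col] = dwell_counts(row, col)
    return image

print(list(serpentine_scan(3, 4)))               # note the reversed odd rows
print(acquire(3, 4, lambda r, c: r * 10 + c))    # stand-in for detector counts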


Author(s):  
P. Pradère ◽  
J.F. Revol ◽  
R. St. John Manley

Although radiation damage is the limiting factor in HREM of polymers, new techniques based on low-dose imaging at low magnification have permitted lattice images to be obtained from very radiation-sensitive polymers such as polyethylene (PE). This paper describes the computer averaging of P4MP1 lattice images. P4MP1 is even more sensitive than PE (total end-point dose of 27 C m⁻², as compared to 100 C m⁻² for PE at 120 kV). It does, however, have the advantage of forming flat crystals from dilute solution, and no change in d-spacings is observed during irradiation. Crystals of P4MP1 were grown at 60°C in xylene (polymer concentration 0.05%). Electron microscopy was performed with a Philips EM 400 T microscope equipped with a Low Dose Unit and operated at 120 kV. Imaging conditions were the same as already described elsewhere. Enlarged micrographs were digitized and processed with the Spider image processing system.
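A hedged sketch of the averaging step written with NumPy rather than the Spider system (the toy lattice, noise model, and function names are assumptions): noisy copies of a lattice image are aligned to a reference by FFT cross-correlation and then averaged, which raises the signal-to-noise ratio available from low-dose exposures.

import numpy as np

def align_and_average(reference, images):
    """Shift each image so it best correlates with the reference, then average."""
    acc = np.zeros_like(reference, dtype=float)
    ref_ft = np.conj(np.fft.fft2(reference))
    for img in images:
        corr = np.fft.ifft2(np.fft.fft2(img) * ref_ft).real   # circular cross-correlation
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        acc += np.roll(img, (-dy, -dx), axis=(0, 1))           # undo the detected shift
    return acc / len(images)

# toy data: a sinusoidal "lattice" plus heavy noise, in randomly shifted copies
rng = np.random.default_rng(0)
y, x = np.mgrid[0:64, 0:64]
lattice = np.sin(2 * np.pi * x / 8)
noisy = [np.roll(lattice, rng.integers(0, 8), axis=1) + rng.normal(0, 2, lattice.shape)
         for _ in range(50)]
avg = align_and_average(lattice, noisy)
print(np.corrcoef(avg.ravel(), lattice.ravel())[0, 1])         # high correlation with the clean lattice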

