Simple Compression Code Supporting Random Access and Fast String Matching

Author(s):  
Kimmo Fredriksson ◽  
Fedor Nikitin
1995 ◽  
Vol 2 (46) ◽  
Author(s):  
Dany Breslauer ◽  
Livio Colussi ◽  
Laura Toniolo

In this paper we study the exact comparison complexity of the string prefix-matching problem in the deterministic sequential comparison model with equality tests. We derive almost tight lower and upper bounds on the number of symbol comparisons required in the worst case by on-line prefix-matching algorithms for any fixed pattern and variable text. Unlike previous results on the comparison complexity of string-matching and prefix-matching algorithms, our bounds are almost tight for any particular pattern. We also consider the special case where the pattern and the text are the same string. This problem, which we call the string self-prefix problem, is similar to the pattern preprocessing step of the Knuth-Morris-Pratt string-matching algorithm that is used in several comparison-efficient string-matching and prefix-matching algorithms, including our new algorithm. We obtain roughly tight lower and upper bounds on the number of symbol comparisons required in the worst case by on-line self-prefix algorithms. Our algorithms can be implemented in linear time and space in the standard uniform-cost random-access-machine model.
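For context, the Knuth-Morris-Pratt pattern preprocessing step mentioned above computes, for every prefix of the pattern, the length of its longest proper prefix that is also a suffix; the self-prefix problem corresponds to matching the pattern against itself. The sketch below shows only the classical textbook computation, not the paper's comparison-optimal variant; the function name and example pattern are illustrative.

```python
def kmp_failure(pattern):
    """Classical KMP preprocessing: fail[i] is the length of the longest
    proper prefix of pattern[:i+1] that is also a suffix of it."""
    fail = [0] * len(pattern)
    k = 0  # length of the border matched so far
    for i in range(1, len(pattern)):
        # Fall back along shorter borders until the next symbol matches.
        while k > 0 and pattern[k] != pattern[i]:
            k = fail[k - 1]
        if pattern[k] == pattern[i]:
            k += 1
        fail[i] = k
    return fail

print(kmp_failure("ababaca"))  # -> [0, 0, 1, 2, 3, 0, 1]
```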


Author(s):  
Z. Galil ◽  
I. Yudkiewicz

The string matching problem is defined as follows: given a string P0 ... Pm-1 called the pattern and a string T0 ... Tn-1 called the text, find all occurrences of the pattern in the text. The output of a string matching algorithm is a boolean array MATCH[0..n-1], which contains a true value at each position where an occurrence of the pattern starts. Many sequential algorithms are known that solve this problem optimally, i.e., in a linear O(n) number of operations, most notable of which are the algorithms by Knuth, Morris and Pratt and by Boyer and Moore. In this chapter we limit ourselves to parallel algorithms. All algorithms considered in this chapter are for the parallel random access machine (PRAM) computation model. In the design of parallel algorithms for the various PRAM models, one tries to optimize two factors simultaneously: the number of processors used and the time required by the algorithm. The total number of operations performed, which is the time-processor product, is the measure of optimality. A parallel algorithm is called optimal if it needs the same number of operations as the fastest sequential algorithm. Hence, in the string matching problem, an algorithm is optimal if its time-processor product is linear in the length of the input strings. Apart from having an optimal algorithm, the designer wishes the algorithm to be the fastest possible, where the only limit on the number of processors is the one caused by the time-processor product. The following fundamental lemma given by Brent is essential for understanding the tradeoff between time and processors: any PRAM algorithm of time t that consists of x elementary operations can be implemented on p processors in O(x/p + t) time. Using Brent's lemma, any algorithm that uses a large number x of processors to run very fast can be implemented on p < x processors with the same total work, but with an increase in time as described. A basic problem in the study of parallel algorithms for strings and arrays is finding the maximal/minimal position in an array that holds a certain value.
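To make the problem statement concrete, here is a minimal sequential reference in Python that produces the boolean MATCH array defined above. It uses a naive O(nm) check per position rather than the optimal Knuth-Morris-Pratt or Boyer-Moore algorithms discussed in the chapter; the function name is illustrative.

```python
def match_positions(pattern, text):
    """Sequential reference for the string matching problem: MATCH[i] is
    True exactly when an occurrence of the pattern starts at text[i]."""
    m, n = len(pattern), len(text)
    MATCH = [False] * n
    for i in range(n - m + 1):
        # Naive O(m) comparison per position; KMP/Boyer-Moore achieve O(n) overall.
        if text[i:i + m] == pattern:
            MATCH[i] = True
    return MATCH

print(match_positions("aba", "abababa"))
# -> [True, False, True, False, True, False, False]
```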


2016 ◽  
Author(s):  
Matteo Berioli ◽  
Giuseppe Cocco ◽  
Gianluigi Liva ◽  
Andrea Munari

2018 ◽  
Author(s):  
Tuba Kiyan ◽  
Heiko Lohrke ◽  
Christian Boit

Abstract This paper compares the three major semi-invasive optical approaches, Photon Emission (PE), Thermal Laser Stimulation (TLS) and Electro-Optical Frequency Mapping (EOFM), for contactless static random access memory (SRAM) content read-out on a commercial microcontroller. Advantages and disadvantages of these techniques are evaluated by applying them to a 1 KB SRAM in an MSP430 microcontroller. It is demonstrated that successful read-out depends strongly on the core voltage parameters for each technique. For PE, better SNR and shorter integration times are achieved by using the highest nominal core voltage. In TLS measurements, the core voltage needs to be externally applied via a current amplifier with a bias voltage slightly above nominal. EOFM can again use the nominal core voltage; however, a modulation needs to be applied, and the amplitude of the modulated supply voltage signal has a strong effect on the quality of the signal. Semi-invasive read-out of the memory content is necessary to remotely understand the organization of the memory, which finds applications in hardware and software security evaluation, reverse engineering, defect localization, failure analysis, chip testing and debugging.


Author(s):  
Srikanth Perungulam ◽  
Scott Wills ◽  
Greg Mekras

Abstract This paper illustrates a yield enhancement effort on a Digital Signal Processor (DSP) where random columns in the Static Random Access Memory (SRAM) were found to be failing. In this SRAM circuit, sense amps are designed with a two-stage separation and latch sequence. In the failing devices, the bit line and bit_bar line were not separated far enough in voltage before the latch was triggered. The design team determined that the sense amp was being turned on too quickly. The final conclusion was that a marginal sense amp design, combined with process deviations, would result in this type of failure. The possible process issues were narrowed to variations of via resistances on the bit and bit_bar lines. Scanning Electron Microscope (SEM) inspection of the Focused Ion Beam (FIB) cross sections, followed by Transmission Electron Microscopy (TEM), showed the presence of contaminants at the bottom of the vias, causing the resistance variations.


Author(s):  
Phil Schani ◽  
S. Subramanian ◽  
Vince Soorholtz ◽  
Pat Liston ◽  
Jamey Moss ◽  
...  

Abstract Temperature-sensitive single-bit failures at wafer-level testing on 0.4µm Fast Static Random Access Memory (FSRAM) devices are analyzed. Top-down deprocessing and planar Transmission Electron Microscopy (TEM) analyses show a unique dislocation in the substrate to be the cause of these failures. The dislocation always occurs at the exact same location within the bitcell layout with respect to the single-bit failing data state. The dislocation is believed to be associated with the buried contact processing used in this type of bitcell layout.

