OPTIMAL BASIC BLOCK INSTRUCTION SCHEDULING FOR MULTIPLE-ISSUE PROCESSORS USING CONSTRAINT PROGRAMMING

2008 ◽  
Vol 17 (01) ◽  
pp. 37-54 ◽  
Author(s):  
ABID M. MALIK ◽  
JIM McINNES ◽  
PETER VAN BEEK

Instruction scheduling is one of the most important steps for improving the performance of object code produced by a compiler. A fundamental problem that arises in instruction scheduling is to find a minimum length schedule for a basic block — a straight-line sequence of code with a single entry point and a single exit point — subject to precedence, latency, and resource constraints. Solving the problem exactly is NP-complete, and heuristic approaches are currently used in most compilers. In contrast, we present a scheduler that finds provably optimal schedules for basic blocks using techniques from constraint programming. In developing our optimal scheduler, the key to scaling up to large, real problems was in the development of preprocessing techniques for improving the constraint model. We experimentally evaluated our optimal scheduler on the SPEC 2000 integer and floating point benchmarks. On this benchmark suite, the optimal scheduler was very robust — all but a handful of the hundreds of thousands of basic blocks in our benchmark suite were solved optimally within a reasonable time limit — and scaled to the largest basic blocks, including basic blocks with up to 2600 instructions. This compares favorably to the best previous exact approaches.
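As a rough illustration of the kind of constraint model the abstract describes (not the authors' actual formulation or preprocessing techniques), the sketch below encodes a tiny basic block as a minimum-makespan problem with precedence/latency constraints and an issue-width resource constraint. The toy dependence DAG, latencies, and the use of Google OR-Tools CP-SAT as a stand-in solver are all assumptions.

```python
# Minimal sketch of a constraint model for basic block scheduling.
# Assumptions: a toy dependence DAG with per-edge latencies and a 2-wide
# issue restriction; OR-Tools CP-SAT stands in for the paper's CP solver.
from ortools.sat.python import cp_model

# Edge (i, j, latency): instruction j may not issue earlier than
# `latency` cycles after instruction i.
n_instructions = 5
edges = [(0, 2, 3), (1, 2, 1), (2, 3, 2), (2, 4, 2)]
issue_width = 2
horizon = n_instructions * (1 + max(l for _, _, l in edges))

model = cp_model.CpModel()
cycle = [model.NewIntVar(0, horizon, f"c{i}") for i in range(n_instructions)]

# Precedence + latency constraints.
for i, j, latency in edges:
    model.Add(cycle[j] >= cycle[i] + latency)

# Resource constraint: at most issue_width instructions per cycle,
# modelled as unit-length intervals sharing a cumulative resource.
ends = [model.NewIntVar(1, horizon + 1, f"e{i}") for i in range(n_instructions)]
intervals = [model.NewIntervalVar(cycle[i], 1, ends[i], f"iv{i}")
             for i in range(n_instructions)]
model.AddCumulative(intervals, [1] * n_instructions, issue_width)

# Minimise the schedule length (issue cycle of the last instruction).
makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, cycle)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print("schedule length:", solver.Value(makespan) + 1)
    for i in range(n_instructions):
        print(f"instruction {i} issues in cycle {solver.Value(cycle[i])}")
```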

1993 ◽  
Vol 2 (3) ◽  
pp. 1-5
Author(s):  
Martin Charles Golumbic ◽  
Vladimir Rainish

Instruction scheduling algorithms are used in compilers to reduce run-time delays in the compiled code by reordering or transforming program statements, usually at the intermediate language or assembly code level. Considerable research has been carried out on scheduling code within the scope of basic blocks, i.e., straight-line sections of code, and very effective basic block schedulers are now included in most modern compilers, especially for pipelined processors. In previous work (Golumbic and Rainish, IBM J. Res. Dev., Vol. 34, pp. 93-97, 1990), we presented code replication techniques for scheduling beyond the scope of basic blocks that provide reasonable improvements in the running time of the compiled code, but still leave room for further improvement. In this article we present a new method for scheduling beyond basic blocks called SHACOOF. This new technique takes advantage of a conventional, high-quality basic block scheduler by first suppressing selected subsequences of instructions and then scheduling the modified sequence of instructions using the basic block scheduler. A candidate subsequence for suppression can be found by identifying a region of the program control flow graph, called an S-region, which has a unique entry and a unique exit and meets predetermined criteria. This enables scheduling of a sequence of instructions beyond basic block boundaries, with only minimal changes to an existing compiler, by identifying beneficial opportunities to cover delays that would otherwise have been beyond its scope.
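The sketch below is a hedged illustration of the S-region idea only, not the SHACOOF algorithm: it lists candidate single-entry/single-exit node pairs in a control flow graph, approximated as pairs (entry, exit) where the entry dominates the exit and the exit postdominates the entry. The toy CFG, node names, and this simplified criterion are assumptions.

```python
# Candidate single-entry/single-exit (S-region-like) pairs in a CFG,
# approximated via dominators and postdominators. Illustrative only.
import networkx as nx

def dominates(idom, a, b):
    """True if a dominates b in the tree given by immediate dominators."""
    while b in idom:
        if b == a:
            return True
        if idom[b] == b:          # reached the root without meeting a
            return False
        b = idom[b]
    return False

def candidate_s_regions(cfg, entry, exit_node):
    dom = nx.immediate_dominators(cfg, entry)
    pdom = nx.immediate_dominators(cfg.reverse(copy=True), exit_node)
    return [(a, b) for a in cfg.nodes for b in cfg.nodes
            if a != b and dominates(dom, a, b) and dominates(pdom, b, a)]

# Toy CFG: entry -> A -> {B, C} -> D -> exit (the diamond A..D is an S-region).
cfg = nx.DiGraph([("entry", "A"), ("A", "B"), ("A", "C"),
                  ("B", "D"), ("C", "D"), ("D", "exit")])
print(candidate_s_regions(cfg, "entry", "exit"))
```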


Author(s):  
Dehong Qiu ◽  
Jialin Sun ◽  
Hao Li

Measuring program similarity plays an important role in solving many problems in software engineering. However, because programs are instruction sequences with complex structure and semantics, and may furthermore be deliberately obfuscated through semantics-preserving transformations, measuring program similarity is a difficult task that has not been adequately addressed. In this paper, we propose a new approach to measuring Java program similarity. The approach first measures the low-level similarity between basic blocks according to their bytecode instruction sequences and structural properties. Then, an error-tolerant graph matching algorithm that can combat structure transformations is used to match the Control Flow Graphs (CFGs) based on the basic block similarity. The high-level similarity between Java programs is subsequently calculated on the matched pairs of independent paths extracted from the optimal CFG matching. The proposed CFG-Match approach is compared with a string-based approach, a tree-based approach, and a graph-based approach. Experimental results show that the CFG-Match approach is more accurate and robust against semantics-preserving transformations. The CFG-Match approach is then used to detect Java program plagiarism. Experiments on a collection of benchmark program pairs drawn from students' submissions of project assignments demonstrate that the CFG-Match approach outperforms the comparative approaches in detecting Java program plagiarism.
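As a small sketch of the low-level step only (not the paper's CFG matching stage), the snippet below scores the similarity of two basic blocks over their bytecode opcode sequences. The opcode lists and the use of a normalised sequence-matching ratio as the score are illustrative assumptions.

```python
# Hedged sketch: low-level similarity between two basic blocks scored
# over their bytecode opcode sequences.
from difflib import SequenceMatcher

def block_similarity(block_a, block_b):
    """Return a similarity score in [0, 1] between two opcode sequences."""
    return SequenceMatcher(None, block_a, block_b).ratio()

# Two hypothetical basic blocks after a semantics-preserving transformation.
bb1 = ["iload_1", "iload_2", "iadd", "istore_3"]
bb2 = ["iload_2", "iload_1", "iadd", "istore_3"]

print(f"basic block similarity: {block_similarity(bb1, bb2):.2f}")
```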


2017 ◽  
Vol 2017 ◽  
pp. 1-16 ◽  
Author(s):  
Aparna Prayag ◽  
Sanjay Bodkhe

In this paper, a basic block for a novel multilevel inverter topology is proposed. The proposed approach requires a significantly reduced number of dc voltage sources and power switches to attain the maximum number of output voltage levels. By connecting basic blocks in series, a cascaded multilevel topology is developed. Each block is itself also a multilevel inverter. The proposed topology is analyzed in symmetric as well as asymmetric operating modes. It is investigated through computer simulation using MATLAB/Simulink and validated experimentally on a laboratory prototype.
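The sketch below only illustrates the counting argument behind cascading: a series connection outputs the sums of the per-block voltage levels, so asymmetric (unequal) dc sources yield more distinct levels than symmetric ones. The per-block level sets used here are assumptions for illustration, not the proposed topology.

```python
# Hedged sketch: distinct output voltage levels of cascaded basic blocks,
# treating the series connection as summing per-block outputs.
from itertools import product

def cascade_levels(block_level_sets):
    """Distinct output levels of blocks connected in series."""
    return sorted({sum(combo) for combo in product(*block_level_sets)})

# Symmetric mode: every block uses equal dc sources (levels -Vdc, 0, +Vdc).
symmetric = [[-1, 0, 1], [-1, 0, 1]]
# Asymmetric mode: unequal sources widen the attainable level set.
asymmetric = [[-1, 0, 1], [-3, 0, 3]]

print("symmetric levels: ", cascade_levels(symmetric))   # 5 levels
print("asymmetric levels:", cascade_levels(asymmetric))  # 9 levels
```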


2007 ◽  
Vol 14 (6) ◽  
pp. 549-569 ◽  
Author(s):  
Abid M. Malik ◽  
Tyrel Russell ◽  
Michael Chase ◽  
Peter van Beek

2014 ◽  
Vol 536-537 ◽  
pp. 489-493
Author(s):  
Long Chen ◽  
Xiao Yin Yi

To extend the flexibility of data integrity verification methods and adapt them to different verification environments, an improved solution that supports multiple granularities is proposed. It organizes files into three kinds of granularity: data blocks, data sub-blocks, and basic blocks, where basic blocks are gathered to form data sub-blocks. Signatures are computed on the data sub-blocks, and the sub-block signatures are used to generate the signature of the block. The improved scheme can verify both data blocks and sub-blocks: verification at the block level reduces data traffic during the verification process, and combining the two granularities improves overall efficiency. A layered Merkle hash tree is also proposed so that dynamic operations can be supported at either the sub-block or the block level. Security and communication performance analysis shows that the improved scheme is effective and more practical.
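As a rough illustration of the layered rollup idea (not the paper's scheme), the sketch below hashes basic blocks into a data sub-block digest and combines sub-block digests into the data block digest, with SHA-256 standing in for the scheme's signatures.

```python
# Hedged sketch of the hierarchy: basic blocks -> sub-block digest -> block digest.
import hashlib

def digest(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def sub_block_digest(basic_blocks):
    """Digest of a data sub-block, gathered from its basic blocks."""
    return digest(*[digest(bb) for bb in basic_blocks])

def block_digest(sub_blocks):
    """Digest of a data block, built from its sub-block digests."""
    return digest(*[sub_block_digest(sb) for sb in sub_blocks])

# A data block made of two sub-blocks, each gathered from basic blocks.
block = [[b"bb-0", b"bb-1"], [b"bb-2", b"bb-3"]]
print("block digest:", block_digest(block).hex())

# Verifying one sub-block only needs that sub-block's basic blocks plus the
# sibling sub-block digests, which is what keeps verification traffic small.
```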


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Jiageng Yang ◽  
Xinguo Zhang ◽  
Hui Lu ◽  
Muhammad Shafiq ◽  
Zhihong Tian

The root cause of insecurity in smart devices is the potential vulnerabilities they contain, and there are many approaches to finding such bugs. Fuzzing is the most effective vulnerability-finding technique, especially coverage-guided fuzzing, which identifies high-quality seeds according to the code coverage they trigger. Existing coverage-guided fuzzers assume that the higher the code coverage of a seed, the greater the probability of triggering potential bugs. However, in real-world applications running on smart devices, or in the operating system of a smart device, the program logic is very complex, and basic blocks play different roles in the process of application exploration. This observation is ignored by existing seed selection strategies, which reduces the efficiency of bug discovery on smart devices. In this paper, we propose contribution-aware coverage-guided fuzzing, which estimates the contribution of each basic block to the exploration of a smart device. Based on the control flow of the target on the smart device and runtime information gathered during fuzzing, we define a static contribution for each basic block and a dynamic contribution built on the block's execution frequency. The contribution-aware optimization does not require any prior knowledge of the target device, so it applies to both gray-box and white-box fuzzing. We designed and implemented a contribution-aware coverage-guided fuzzer for smart devices, called StFuzzer, and evaluated it on four real-world applications that are often deployed on smart devices to demonstrate the efficiency of our contribution-aware optimization. The results of our trials show that the contribution-aware approach significantly improves bug discovery capability and achieves better execution speed than state-of-the-art fuzzers.
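The abstract does not give StFuzzer's exact contribution formulas, so the sketch below uses assumed ones purely to illustrate the idea of combining a static and a dynamic contribution into a seed score: the static term rewards basic blocks with more CFG successors, and the dynamic term decays with how often a block has already been executed during fuzzing.

```python
# Hedged sketch of contribution-aware seed scoring (formulas are assumptions).
from collections import Counter

def static_contribution(cfg_out_degree: dict) -> dict:
    # Blocks with more successors open up more unexplored paths.
    return {block: 1 + deg for block, deg in cfg_out_degree.items()}

def dynamic_contribution(exec_counts: Counter) -> dict:
    # Frequently executed blocks contribute less to further exploration.
    return {block: 1.0 / (1 + count) for block, count in exec_counts.items()}

def seed_score(covered_blocks, static_c, dynamic_c):
    """Score a seed by the summed contributions of the blocks it covers."""
    return sum(static_c.get(b, 1) * dynamic_c.get(b, 1.0) for b in covered_blocks)

# Toy data: CFG out-degrees and block execution frequencies so far.
out_degree = {"b0": 2, "b1": 1, "b2": 3}
exec_counts = Counter({"b0": 100, "b1": 3, "b2": 0})

static_c = static_contribution(out_degree)
dynamic_c = dynamic_contribution(exec_counts)

seed_a = {"b0", "b1"}   # covers mostly well-explored blocks
seed_b = {"b2"}         # covers a rarely executed, branchy block
print("seed A score:", seed_score(seed_a, static_c, dynamic_c))
print("seed B score:", seed_score(seed_b, static_c, dynamic_c))
```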


2021 ◽  
Vol 7 ◽  
pp. e454
Author(s):  
HyunJin Kim

This article proposes a novel network model, denoted AresB-Net, to achieve more accurate residual binarized convolutional neural networks (CNNs). Even though residual connections enhance the classification accuracy of binarized neural networks by increasing feature resolution, the accuracy degradation relative to real-valued residual CNNs remains the primary concern. AresB-Net consists of novel basic blocks that amortize the severe error from binarization, forming a well-balanced pyramid structure without downsampling convolutions. In each basic block, the shortcut is added to the convolution output and then concatenated, and the expanded channels are shuffled for the next grouped convolution. When downsampling with stride > 1, our model adopts only a max-pooling layer to generate a low-cost shortcut. This structure facilitates feature reuse from previous layers, alleviating the error from binarized convolution and increasing classification accuracy with reduced computational cost and small weight storage requirements. Despite the low hardware cost of binarized computations, the proposed model achieves remarkable classification accuracies on the CIFAR and ImageNet datasets.
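The PyTorch sketch below is a simplified, real-valued illustration of the block's dataflow as described above (shortcut added to the convolution output, concatenated with the shortcut to expand channels, channels shuffled before the next grouped convolution, max-pooling-only shortcut when stride > 1). Binarized convolutions, batch-norm placement, and exact channel counts are omitted or assumed; this is not the AresB-Net reference code.

```python
# Simplified, real-valued sketch of an AresB-Net-like basic block.
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)
    return x.transpose(1, 2).reshape(n, c, h, w)

class AresLikeBlock(nn.Module):
    def __init__(self, channels, stride=1, groups=2):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3,
                              stride=stride, padding=1, groups=groups,
                              bias=False)
        self.bn = nn.BatchNorm2d(channels)
        # Low-cost shortcut: max-pooling only when downsampling.
        self.pool = nn.MaxPool2d(2, 2) if stride > 1 else nn.Identity()

    def forward(self, x):
        shortcut = self.pool(x)
        y = self.bn(self.conv(x)) + shortcut       # add shortcut to conv output
        out = torch.cat([y, shortcut], dim=1)      # concatenate -> 2x channels
        return channel_shuffle(out, groups=2)      # shuffle for next grouped conv

x = torch.randn(1, 16, 32, 32)
print(AresLikeBlock(16, stride=2)(x).shape)        # torch.Size([1, 32, 16, 16])
```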

