random test: Recently Published Documents

TOTAL DOCUMENTS: 144 (five years: 21)
H-INDEX: 17 (five years: 1)

2021
Author(s): Dai Duong Tran, Thi Giang Truong, Truong Giang Do, The Duc Do

Author(s): Vishnupriya Shivakumar, C. Senthilpari, Zubaida Yusoff, ...

Linear feedback shift registers (LFSRs) are frequently used in Built-In Self-Test (BIST) designs for pseudo-random test pattern generation. High test pattern volume and low test power consumption are key requirements in large, complex designs. The motivation of this study is to generate efficient pseudo-random test patterns with the proposed LFSR for use in BIST designs. The proposed LFSR satisfies the main BIST design strategies, namely re-seeding and reduced test power consumption, where the re-seeding approach exploits maximum-length pseudo-random test patterns. The objective of this paper is to propose a new LFSR circuit based on a Reed-Solomon (RS) algorithm. The RS algorithm is constructed by considering maximum-length test patterns with a minimum distance over time t, and it achieves effective test pattern generation at a complexity of O(m log2 m), where m denotes the total number of message bits. We analysed the RS LFSR mathematically using its feedback polynomial to decrease the area overhead of the design. The bit-wise stages of the proposed RS LFSR were simulated on the TSMC 130 nm process using the Mentor Graphics IC design platform. Experimental results show that the proposed LFSR generates effective pseudo-random test patterns with a lower test power consumption of 25.13 µW and a test time of 49.9 µs. In addition, the proposed LFSR and LFSRs from prior work were applied in a BIST design to compare their power consumption. All simulations were run at a maximum operating frequency of 1.9 GHz.
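As a rough illustration of the mechanism (not the authors' RS design, whose feedback polynomial the abstract does not give), the Python sketch below implements a plain maximal-length Fibonacci LFSR with re-seeding; the 8-bit width and the taps for x^8 + x^6 + x^5 + x^4 + 1 are assumptions.

```python
# Minimal sketch of a Fibonacci LFSR test-pattern generator with re-seeding.
# The width and taps are illustrative stand-ins, not the paper's RS-based
# feedback polynomial.

def lfsr_patterns(seed, taps=(8, 6, 5, 4), width=8):
    """Yield pseudo-random test patterns from a Fibonacci LFSR."""
    state = seed & ((1 << width) - 1)
    assert state != 0, "the all-zero state locks up an LFSR"
    while True:
        yield state
        # XOR the tapped bits (taps are 1-indexed from the least significant bit).
        feedback = 0
        for t in taps:
            feedback ^= (state >> (t - 1)) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)

# Re-seeding: restart the generator from a fresh seed to steer the pattern
# sequence, as BIST re-seeding schemes do.
gen = lfsr_patterns(seed=0x5A)
patterns = [next(gen) for _ in range(10)]
```

With a maximal-length polynomial, the register cycles through all 2^width - 1 nonzero states before repeating, which is what makes re-seeding with maximum-length patterns attractive.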


Electronics, 2021, Vol. 10 (16), p. 1921
Author(s): Qian Lyu, Dalin Zhang, Rihan Da, Hailong Zhang

Coverage-guided greybox fuzzing aims at generating random test inputs that trigger vulnerabilities in target programs while achieving high code coverage. As testing proceeds, its scale gradually becomes larger and more complex, and eventually the fuzzer runs into a saturation state where new vulnerabilities are hard to find. In this paper, we propose a fuzzer, ReFuzz, that acts as a complement to existing coverage-guided fuzzers and a remedy for saturation. This approach favours the generation of inputs that lead only to already covered paths, discarding all others, which is exactly the opposite of what existing fuzzers do. ReFuzz takes the test inputs generated by a regular, saturated fuzzing run and continues to explore the target program with the goal of preserving the code coverage. The insight is that coverage-guided fuzzers tend to underplay already covered execution paths while seeking to reach new ones, so covered paths are examined insufficiently. In our experiments, ReFuzz discovered tens of new unique crashes that AFL failed to find, of which nine vulnerabilities were submitted and accepted to the CVE database.
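A minimal sketch of this inverted seed-selection rule might look as follows; run_with_coverage and mutate are hypothetical stand-ins for the usual fuzzer plumbing, since the abstract does not show ReFuzz's implementation.

```python
# Sketch of a coverage-preserving fuzzing loop in the spirit of ReFuzz.
# run_with_coverage(input) -> (set of covered edges, crashed?) and
# mutate(input) are assumed helpers, not a real fuzzer API.
import random

def refuzz_loop(saturated_corpus, covered_edges, run_with_coverage, mutate,
                iterations=100_000):
    """Keep mutants that stay within already-covered paths, the opposite
    of a classic coverage-guided fuzzer that keeps only novelty."""
    corpus = list(saturated_corpus)
    crashes = []
    for _ in range(iterations):
        child = mutate(random.choice(corpus))
        edges, crashed = run_with_coverage(child)
        if crashed:
            crashes.append(child)
        # Inverted rule: discard inputs that escape the covered region;
        # retain those that re-exercise known paths more thoroughly.
        if edges <= covered_edges:
            corpus.append(child)
    return crashes
```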


2021, Vol. 10 (1), pp. 11-17
Author(s): Deniz Simsek, Caner Ozboke, Ela Arican Gultekin, ...

Dual tasks are often used in the study of postural control. These tasks, which typically target motor skills and cognitive performance, also help to characterise an individual's postural control. The purpose of this study is to determine changes in performance during a motor task that includes cognitive cues in hearing-impaired athletes. A total of 31 hearing-impaired athletes (19 male, 12 female) and 34 hearing-impaired sedentary people (18 male, 16 female) participated voluntarily. The FitLight Trainer™ system was used to measure participants' reaction times. The performance time of hearing-impaired male athletes was significantly lower than that of hearing-impaired sedentary men in all three tests (Random Test: t = 4.089, p < 0.05; Cue Test: t = 3.551, p < 0.05; Mixed Cue Test: t = 2.393, p < 0.05). The performance time of hearing-impaired female athletes was likewise significantly lower than that of sedentary hearing-impaired women across all protocols (Random Test: t = 2.586, p < 0.05; Cue Test: t = 2.568, p < 0.05; Mixed Cue Test: t = 2.899, p < 0.05). This study demonstrates that 1) hearing-impaired athletes perform postural control adjustments automatically during the motor task, requiring minimal cognitive effort; and 2) regular physical activity and training have a positive effect on other systems, especially the proprioceptive system, which controls balance. Future studies should compare dual-task reaction times and postural control strategies between hearing-impaired athletes and athletes without hearing disabilities.
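For reference, the independent-samples t statistics reported above can be computed as in the SciPy sketch below; the reaction times are placeholder values, not the study's data.

```python
# Independent-samples t-test, as used to compare athletes vs. sedentary
# participants. The numbers are made-up illustration values in seconds.
from scipy import stats

athletes = [0.52, 0.48, 0.55, 0.50, 0.47]
sedentary = [0.61, 0.66, 0.58, 0.63, 0.69]

t, p = stats.ttest_ind(athletes, sedentary)
print(f"t = {t:.3f}, p = {p:.3f}")  # significant if p < 0.05
```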


Author(s): Anjiang Wei, Pu Yi, Tao Xie, Darko Marinov, Wing Lam

Software developers frequently check their code changes by running a set of tests against their code. Tests that can nondeterministically pass or fail when run on the same code version are called flaky tests. These tests are a major problem because they can mislead developers into debugging their recent code changes when the failures are unrelated to those changes. One prominent category of flaky tests is order-dependent (OD) tests, which deterministically pass or fail depending on the order in which the set of tests is run. By detecting OD tests in advance, developers can fix these tests before they change their code. Due to the high cost of exploring all possible orders (n! permutations for n tests), prior work has developed tools that randomize orders to detect OD tests. Experiments have shown that randomization can detect many OD tests, and that most OD tests depend on just one other test to fail. However, there has been no analysis of the probability that randomized orders detect OD tests. In this paper, we present the first such analysis and also present a simple change to the sampling of random test orders that increases this probability. Finally, we present a novel algorithm to systematically explore all consecutive pairs of tests, guaranteeing detection of all OD tests that depend on one other test, while running substantially fewer orders and tests than simply running all test pairs.
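The pair-packing idea behind such a systematic algorithm can be illustrated with a simple greedy sketch (this is not the paper's algorithm): an order of length k covers k - 1 consecutive pairs, so chaining uncovered pairs into longer orders requires far fewer runs than executing every ordered pair as its own two-test order.

```python
# Greedy illustration of packing all ordered test pairs into consecutive
# positions of longer orders. Running these orders exercises every ordered
# pair (a, b) as adjacent tests at least once.
from itertools import permutations

def pair_packing_orders(tests):
    uncovered = set(permutations(tests, 2))  # all ordered pairs (a, b)
    orders = []
    while uncovered:
        a, b = next(iter(uncovered))
        order, used = [a, b], {a, b}
        extended = True
        while extended:  # extend the chain with another uncovered pair
            extended = False
            for c in tests:
                if c not in used and (order[-1], c) in uncovered:
                    order.append(c)
                    used.add(c)
                    extended = True
                    break
        for pair in zip(order, order[1:]):
            uncovered.discard(pair)
        orders.append(order)
    return orders

print(pair_packing_orders(["t1", "t2", "t3", "t4"]))
```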


Author(s): Harrison Goldstein, John Hughes, Leonidas Lampropoulos, Benjamin C. Pierce

Property-based testing uses randomly generated inputs to validate high-level program specifications. It can be shockingly effective at finding bugs, but it often requires generating a very large number of inputs to do so. In this paper, we apply ideas from combinatorial testing, a powerful and widely studied testing methodology, to modify the distributions of our random generators so as to find bugs with fewer tests. The key concept is combinatorial coverage, which measures the degree to which a given set of tests exercises every possible choice of values for every small combination of input features.

In its “classical” form, combinatorial coverage only applies to programs whose inputs have a very particular shape—essentially, a Cartesian product of finite sets. We generalize combinatorial coverage to the richer world of algebraic data types by formalizing a class of sparse test descriptions based on regular tree expressions. This new definition of coverage inspires a novel combinatorial thinning algorithm for improving the coverage of random test generators, requiring many fewer tests to catch bugs. We evaluate this algorithm on two case studies, a typed evaluator for System F terms and a Haskell compiler, showing significant improvements in both.
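To make the classical notion concrete, the sketch below computes pairwise combinatorial coverage for inputs drawn from a Cartesian product of finite sets; the paper's generalization to algebraic data types is considerably richer than this.

```python
# Pairwise ("classical") combinatorial coverage: the fraction of all
# (feature pair, value pair) combinations exercised by a test set.
from itertools import combinations, product

def pairwise_coverage(domains, tests):
    """domains: one finite value set per input position.
    tests: tuples drawn from the Cartesian product of those sets."""
    total = covered = 0
    for i, j in combinations(range(len(domains)), 2):
        for vi, vj in product(domains[i], domains[j]):
            total += 1
            if any(t[i] == vi and t[j] == vj for t in tests):
                covered += 1
    return covered / total

domains = [{0, 1}, {"a", "b"}, {True, False}]
tests = [(0, "a", True), (1, "b", False), (0, "b", False)]
print(pairwise_coverage(domains, tests))  # 8 of 12 value pairs -> ~0.667
```

A thinning algorithm in this spirit would bias or filter a random generator toward tests that raise this ratio fastest.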


2020, Vol. 65 (2), p. 78
Author(s): C.M. Tiutin, M.-T. Trifan, A. Vescan

Changes to software necessitate confirmation testing and regression testing, since modifications may introduce new errors. Test case prioritization is one method for optimizing which test cases are executed first, scheduling them in an order that detects faults as early as possible. The main aim of our paper is to propose a test case prioritization technique that uses defect prediction as a criterion for prioritization, in addition to the standard approach that considers the number of discovered faults. We performed several experiments, considering both the faults and the defect prediction values for each class. We compare our approach with random test case execution (on a theoretical example) and with the fault-based approach (on the Mockito project). The results are encouraging: for several class changes, our proposed hybrid approach obtained better results.
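A hedged sketch of this hybrid idea: rank test cases by a weighted mix of historically discovered faults and the defect-prediction scores of the classes they cover. The weights and field names below are illustrative assumptions, not values from the paper.

```python
# Hybrid test case prioritization: combine fault history with per-class
# defect prediction. Weights are arbitrary for illustration.

def prioritize(tests, defect_prob, w_faults=0.5, w_pred=0.5):
    """tests: dicts with 'name', 'faults_found', 'covered_classes'.
    defect_prob: class name -> predicted defect probability."""
    def score(t):
        pred = sum(defect_prob.get(c, 0.0) for c in t["covered_classes"])
        return w_faults * t["faults_found"] + w_pred * pred
    return sorted(tests, key=score, reverse=True)

tests = [
    {"name": "testA", "faults_found": 2, "covered_classes": ["Order"]},
    {"name": "testB", "faults_found": 0, "covered_classes": ["Cart", "Pay"]},
]
defect_prob = {"Order": 0.2, "Cart": 0.7, "Pay": 0.6}
print([t["name"] for t in prioritize(tests, defect_prob)])  # testA first
```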

