Enhanced Performance of Adaptive Random Partitioning Testing by Unifying the ARPT-1 and ARPT-2 Strategies

Software testing is considered one of the most powerful and important phases of software development. An effective testing process leads to more accurate and reliable results and to high-quality software products. Random testing (RT) is a major software testing strategy; its simplicity makes it among the most efficient strategies with respect to the time required for test case selection, but its significant drawback is low defect detection effectiveness. This drawback is overcome by Adaptive Testing (AT); however, AT carries high computational complexity. One important method for improving RT is Adaptive Random Testing (ART). Another class of strategies is partition testing, one of the standard software testing approaches, which divides the input domain into a set number of disjoint partitions and selects test cases from within each partition. The hybrid approach combining AT and Random Partition Testing (RPT) is known as the ARPT strategy. In ARPT, random partitioning is improved by introducing different clustering algorithms that resolve the problem's parameter space between the target method and the objective function of the test data. In this way, random partitioning is improved to reduce the time consumption and complexity of the ARPT testing strategies. The parameters of the enhanced ARPT testing approaches are optimized using different optimization algorithms. The computational complexity of the Optimized Improved ARPT (OIARPT) testing strategies is reduced by selecting the best test cases using a Support Vector Machine (SVM). In this paper, the Optimized Improved ARPT testing strategies with SVM are unified and named Unified ARPT (UARPT), which enhances testing performance and reduces the time required to test software.
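As a rough illustration of the alternation the ARPT family relies on, the sketch below interleaves an adaptive phase (choose the candidate farthest from all executed tests) with a random-partition phase (split the remaining pool and draw one case per partition). It assumes one-dimensional numeric test inputs; `run_test`, the phase lengths, and the budget are hypothetical stand-ins, not the paper's actual parameters.

```python
import random

def arpt(test_pool, run_test, n_adaptive=5, n_partitions=4, budget=60):
    """Sketch of an ARPT-style loop: an adaptive phase picks the candidate
    farthest from every executed test; a random-partition phase splits the
    remaining pool and draws one case per partition before returning."""
    executed, failing = [], []

    def dist_to_executed(c):
        # Distance to the nearest already-executed test (1-D inputs assumed).
        return min((abs(c - e) for e in executed), default=float("inf"))

    while test_pool and len(executed) < budget:
        # Adaptive phase: n_adaptive far-apart test cases.
        for _ in range(min(n_adaptive, len(test_pool))):
            tc = max(test_pool, key=dist_to_executed)
            test_pool.remove(tc)
            executed.append(tc)
            if run_test(tc):
                failing.append(tc)
        # Random-partition phase: one random case from each partition.
        random.shuffle(test_pool)
        parts = [test_pool[i::n_partitions] for i in range(n_partitions)]
        test_pool = []
        for part in parts:
            if part:
                tc = part.pop(random.randrange(len(part)))
                executed.append(tc)
                if run_test(tc):
                    failing.append(tc)
                test_pool.extend(part)
    return failing

# Toy run: inputs 0..999 in steps of 7; the "program" fails on [300, 320).
print(arpt(list(range(0, 1000, 7)), run_test=lambda x: 300 <= x < 320))
```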

2019 ◽  
Vol 8 (3) ◽  
pp. 4265-4271

Software testing is an essential activity for quality assurance in the software industry, since it can effectively remove defects before software deployment. A good software testing strategy accomplishes the fundamental testing objective while resolving the trade-off between testing effectiveness and efficiency. Adaptive and Random Partition software Testing (ARPT), a combination of Adaptive Testing (AT) and Random Partition Testing (RPT), is used to test software effectively. It has two variants, ARPT-1 and ARPT-2. In ARPT-1, AT is used to select a certain number of test cases, and then RPT is used to select a number of test cases before returning to AT. In ARPT-2, AT is used to select the first m test cases, after which the strategy switches to RPT for the remaining tests. The computational complexity of random partitioning in ARPT was addressed by clustering the test cases with different clustering algorithms. The parameters of ARPT-1 and ARPT-2 need to be estimated separately for different software, which leads to high computational overhead and time consumption. This was solved by improvised BAT optimization algorithms, and the resulting approaches are named Optimized ARPT1 (OARPT1) and OARPT2. However, using all test cases in OARPT leads to high time consumption and computational overhead. To avoid this problem, OARPT1 with Support Vector Machine (OARPT1-SVM) and OARPT2-SVM are introduced in this paper. The SVM is used to select the best test cases for the OARPT-1 and OARPT-2 testing strategies: it constructs a hyperplane in a multi-dimensional space that separates test cases with high code and branch coverage from those with low code and branch coverage. The selected test cases are then used in OARPT-1 and OARPT-2 to test software. In the experiments, three different software systems are used to demonstrate the effectiveness of the proposed OARPT1-SVM and OARPT2-SVM testing strategies in terms of time consumption, defect detection efficiency, branch coverage and code coverage.
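A minimal sketch of the SVM-based selection step described above, using scikit-learn: test cases are represented by hypothetical (statement coverage, branch coverage) features, a classifier is trained on labeled historical cases, and only candidates predicted as high-coverage are kept for the OARPT loops. The feature set and the training data are illustrative assumptions, not the paper's actual measurements.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical history: one row per test case, features =
# (statement coverage %, branch coverage %); label 1 = high coverage.
X_train = np.array([[82, 75], [90, 88], [35, 20], [40, 31], [78, 69], [22, 15]])
y_train = np.array([1, 1, 0, 0, 1, 0])

clf = SVC(kernel="rbf", C=1.0)       # separating surface in feature space
clf.fit(X_train, y_train)

# Candidate test cases, with coverage measured in a profiling run.
candidates = np.array([[85, 80], [30, 25], [70, 66], [15, 10]])
selected = candidates[clf.predict(candidates) == 1]
print(selected)                       # these feed the OARPT-1/OARPT-2 loops
```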


Author(s):  
KWOK PING CHAN ◽  
TSONG YUEH CHEN ◽  
DAVE TOWEY

Restricted Random Testing (RRT) is a new method of testing software that improves upon traditional Random Testing (RT) techniques. Research has indicated that failure patterns (portions of an input domain which, when executed, cause the program to fail or reveal an error) can influence the effectiveness of testing strategies. For certain types of failure patterns, it has been found that a widespread and even distribution of test cases in the input domain can be significantly more effective at detecting failure compared with ordinary RT. Testing methods based on RT, but which aim to achieve even and widespread distributions, have been called Adaptive Random Testing (ART) strategies. One implementation of ART is RRT. RRT uses exclusion zones around executed, but non-failure-causing, test cases to restrict the regions of the input domain from which subsequent test cases may be drawn. In this paper, we introduce the motivation behind RRT, explain the algorithm and detail some empirical analyses carried out to examine the effectiveness of the method. Two versions of RRT are presented: Ordinary RRT (ORRT) and Normalized RRT (NRRT). The two versions share the same fundamental algorithm, but differ in their treatment of non-homogeneous input domains. Investigations into the use of alternative exclusion shapes are outlined, and a simple technique for reducing the computational overheads of RRT, prompted by the alternative exclusion shape investigations, is also explained. The performance of RRT is compared with RT and another ART method based on maximized minimum test case separation (DART), showing excellent improvement over RT and a very favorable comparison with DART.
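The following one-dimensional sketch illustrates the exclusion-zone idea (the published method works in multi-dimensional input domains with circular or hyperspherical zones): each executed, non-failure-causing test case excludes a radius `r` around itself, with `r` sized so the zones jointly cover a fixed target ratio of the domain, and candidates falling inside any zone are redrawn. `target_ratio` and the fallback after `max_tries` draws are assumptions of this sketch.

```python
import random

def rrt(run_test, domain=(0.0, 1.0), target_ratio=1.5, budget=50, max_tries=1000):
    """1-D Restricted Random Testing sketch: every executed,
    non-failure-causing test excludes a radius r around itself, with r
    sized so the zones jointly cover target_ratio * |domain|."""
    lo, hi = domain
    non_failing = []
    for _ in range(budget):
        r = (target_ratio * (hi - lo)) / (2 * len(non_failing)) if non_failing else 0.0
        c = random.uniform(lo, hi)
        for _ in range(max_tries):
            if all(abs(c - t) > r for t in non_failing):
                break
            c = random.uniform(lo, hi)   # inside a zone: redraw
        # (if max_tries is exhausted, the last draw is accepted anyway)
        if run_test(c):
            return c                     # first failure-causing input found
        non_failing.append(c)
    return None

# Toy run: the failure region is [0.42, 0.45).
print(rrt(lambda x: 0.42 <= x < 0.45))
```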


Author(s):  
RYO INOKUCHI ◽  
SADAAKI MIYAMOTO

Recently, kernel methods from support vector machines have been widely used in machine learning algorithms to obtain nonlinear models. Clustering is an unsupervised learning method that divides a whole data set into subgroups, and popular clustering algorithms such as c-means employ kernel methods. Other kernel-based clustering algorithms have been inspired by kernel c-means. However, the formulation of kernel c-means has a high computational complexity. This paper gives an alternative formulation of kernel-based clustering algorithms derived from competitive learning clustering. The new formulation uses sequential updating, or on-line learning, to avoid the high computational complexity. We apply kernel methods to related algorithms: learning vector quantization and the self-organizing map. We moreover derive kernel methods for sequential c-means and its fuzzy version from the proposed formulation.
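To make the idea concrete, here is a small sketch of sequential (on-line) kernel clustering in the competitive-learning style: each implicit cluster center is kept as a convex combination of mapped points, so feature-space distances are computed purely from kernel values, and only the winning center's coefficients are updated per sample. The RBF kernel, learning rate, and update rule are generic choices, not necessarily the paper's exact formulation.

```python
import numpy as np

def rbf_kernel_matrix(X, gamma=0.5):
    # Gram matrix of the Gaussian (RBF) kernel.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def sequential_kernel_clustering(X, n_clusters=2, eta=0.1, epochs=5, seed=0):
    """On-line (competitive-learning) kernel clustering sketch. Each
    implicit center is a convex combination v_i = sum_k A[i,k] phi(x_k),
    so squared feature-space distances need only kernel evaluations:
    ||phi(x) - v_i||^2 = K(x,x) - 2 sum_k A[i,k] K(x,x_k) + A_i K A_i."""
    rng = np.random.default_rng(seed)
    n = len(X)
    K = rbf_kernel_matrix(X)
    A = np.zeros((n_clusters, n))
    for i, k in enumerate(rng.choice(n, n_clusters, replace=False)):
        A[i, k] = 1.0                      # seed each center on one point
    for _ in range(epochs):
        for k in rng.permutation(n):
            quad = np.einsum('ij,jl,il->i', A, K, A)
            d = K[k, k] - 2 * A @ K[:, k] + quad
            w = int(np.argmin(d))          # winning cluster
            A[w] *= (1 - eta)              # on-line update: pull v_w
            A[w, k] += eta                 # toward phi(x_k)
    quad = np.einsum('ij,jl,il->i', A, K, A)
    dists = K.diagonal()[:, None] - 2 * (A @ K).T + quad[None, :]
    return np.argmin(dists, axis=1)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])
print(sequential_kernel_clustering(X))
```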


Webology ◽  
2021 ◽  
Vol 18 (SI01) ◽  
pp. 75-87
Author(s):  
Samera Obaid Barraood ◽  
Haslina Mohd ◽  
Fauziah Baharom

Software testing is an essential process for ensuring the quality and reliability of software products. The efficiency of testing activities depends largely on test case quality, which is considered one of the major concerns of software testing. Unfortunately, at the moment there is no clear guideline that software testers can refer to when producing good-quality test cases; hence, producing such a guideline is certainly required. To construct a pragmatic guideline, it is crucial to identify the factors that lead to designing good-quality test cases. The existing test case quality factors are not comprehensive and need further investigation and improvement. Therefore, a content analysis was conducted to identify test case quality factors from the point of view of software testing experts, as expressed on software testing websites. These websites provide explicit information about the quality of test cases in order to avoid poor test case design. This study presents the outcomes of a content analysis of 22 software testing websites, comprising static content websites and blogs. Consequently, eight (8) factors and their corresponding 30 sub-factors were identified. Among the factors are documentation, manageability, maintainability, reusability, requirement quality, efficiency, tester knowledge, and effectiveness of test cases. These factors can usefully be referred to by practitioners in assuring the quality of designed test cases, which implicitly helps ensure the quality of software products.


2019 ◽  
Vol 8 (4) ◽  
pp. 10530-10535

To reduce the cost of software testing, we propose a novel technique for testing and classification based on clustering methods that classify test cases into effective and ineffective groups. The technique is based on execution data obtained from pre-release runs of the program under test. We introduce two new clustering algorithms: centroid-based and hierarchical clustering. The case study shows that the test case clustering results can be distinguished effectively, with a high recall ratio and noteworthy precision. This paper presents the clustering approach by comparing and investigating factors such as coverage criteria, construction features, and pre-release fault quality.
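A brief sketch of the two clustering routes on hypothetical coverage profiles, using scikit-learn's k-means (centroid-based) and agglomerative (hierarchy-based) implementations; the profile matrix and the cluster count are illustrative assumptions, not data from the study.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

# Hypothetical execution profiles: rows are test cases, columns are
# program branches (1 = covered in a pre-release run, 0 = not covered).
profiles = np.array([
    [1, 1, 0, 0, 1, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 1],
    [1, 0, 0, 0, 1, 0],
])

centroid_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
hier_labels = AgglomerativeClustering(n_clusters=2).fit_predict(profiles)
print(centroid_labels, hier_labels)
# Clusters of similar profiles approximate groups of effective vs. redundant
# test cases; recall and precision of the grouping can then be measured
# against known pre-release faults.
```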


Regression testing is carried out to ascertain that changes made to the source code have not negatively affected its behavior. Hence, it is a crucial and expensive step of the software development life cycle: it re-establishes confidence in the correctness of the software after changes are made. A test suite is used to test the software, but re-executing every test case each time regression testing is done often becomes too time consuming. Therefore, it becomes essential to decrease the number of test cases by prioritizing them based on some criterion, ensuring maximum fault detection in the least amount of time. In this paper, the author compares swarm intelligence techniques with genetic algorithms for such test suite prioritization. In particular, taking a sample GCD program, Ant Colony Optimization (ACO) is compared with Genetic Algorithms (GA) for the purpose of test suite minimization, with the execution time required for prioritizing the test cases as the unit of comparison. Further, the experimental results of both are compared with the time taken by random testing.
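A compact sketch of GA-based prioritization on a toy fault matrix: chromosomes are permutations of test cases, fitness is the cumulative execution time until all known faults are detected, and evolution uses elitism plus swap mutation. The fault matrix, execution times, and GA settings are hypothetical, and the paper's ACO counterpart is not reproduced here.

```python
import random

# Hypothetical fault matrix: faults[t] = set of faults that test t detects.
faults = {0: {1}, 1: {2, 3}, 2: set(), 3: {1, 4}, 4: {5}}
exec_time = {0: 2.0, 1: 3.0, 2: 1.0, 3: 2.5, 4: 1.5}

def time_to_detect_all(order):
    """Fitness: cumulative execution time until every fault is seen."""
    remaining = set().union(*faults.values())
    t = 0.0
    for tc in order:
        t += exec_time[tc]
        remaining -= faults[tc]
        if not remaining:
            return t
    return t + 1e6   # penalize orders that never reach all faults

def ga_prioritize(pop_size=30, generations=100, seed=1):
    rng = random.Random(seed)
    tests = list(faults)
    pop = [rng.sample(tests, len(tests)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=time_to_detect_all)      # lower time = fitter
        elite = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            child = rng.choice(elite)[:]
            i, j = rng.sample(range(len(child)), 2)   # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
    return min(pop, key=time_to_detect_all)

print(ga_prioritize())   # a prioritized order of the five test cases
```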


Author(s):  
M. Tanveer ◽  
Tarun Gupta ◽  
Miten Shah ◽  

Twin Support Vector Clustering (TWSVC) is a clustering algorithm inspired by the principles of the Twin Support Vector Machine (TWSVM). TWSVC has already outperformed other traditional plane-based clustering algorithms. However, TWSVC uses the hinge loss, which maximizes the shortest distance between clusters and hence suffers from noise sensitivity and low re-sampling stability. In this article, we propose Pinball loss Twin Support Vector Clustering (pinTSVC) as a clustering algorithm. The proposed pinTSVC model incorporates the pinball loss function in the plane clustering formulation. The pinball loss function introduces favorable properties such as noise insensitivity and re-sampling stability. The time complexity of the proposed pinTSVC remains equivalent to that of TWSVC. Extensive numerical experiments on noise-corrupted benchmark UCI and artificial datasets have been provided. Results of the proposed pinTSVC model are compared with TWSVC, Twin Bounded Support Vector Clustering (TBSVC) and Fuzzy c-means clustering (FCM). Detailed and exhaustive comparisons demonstrate the better performance and generalization of the proposed pinTSVC for noise-corrupted datasets. Further experiments and analysis on the performance of the above-mentioned clustering algorithms on structural MRI (sMRI) images taken from the ADNI database, face clustering, and facial expression clustering have been carried out to demonstrate the effectiveness and feasibility of the proposed pinTSVC model.
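For reference, here is a sketch contrasting the two loss functions at the heart of the comparison, written for a margin variable u = y·f(x); the exact plane-clustering objective of pinTSVC is more involved than this pointwise view.

```python
import numpy as np

def hinge(u):
    """Hinge loss: zero once the margin is met; only violations are
    penalized, which makes the fit sensitive to noise near the boundary."""
    return np.maximum(0.0, 1.0 - u)

def pinball(u, tau=0.5):
    """Pinball loss: hinge-like slope for margin violations, plus a slope
    of tau on the correct side of the margin; tau = 0 recovers the hinge."""
    return np.where(1.0 - u >= 0, 1.0 - u, -tau * (1.0 - u))

u = np.linspace(-2, 3, 7)
print(hinge(u))
print(pinball(u, tau=0.3))
```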


Mathematics ◽  
2021 ◽  
Vol 9 (15) ◽  
pp. 1779
Author(s):  
Wanida Khamprapai ◽  
Cheng-Fa Tsai ◽  
Paohsi Wang ◽  
Chi-En Tsai

Test case generation is an important process in software testing. However, manual generation of test cases is a time-consuming process. Automation can considerably reduce the time required to create adequate test cases for software testing. Genetic algorithms (GAs) are considered to be effective in this regard. The multiple-searching genetic algorithm (MSGA) uses a modified version of the GA to solve the multicast routing problem in network systems. MSGA can be improved to make it suitable for generating test cases. In this paper, a new algorithm called the enhanced multiple-searching genetic algorithm (EMSGA), which involves a few additional processes for selecting the best chromosomes in the GA process, is proposed. The performance of EMSGA was evaluated through comparison with seven different search-based techniques, including random search. All algorithms were implemented in EvoSuite, which is a tool for automatic generation of test cases. The experimental results showed that EMSGA increased the efficiency of testing when compared with conventional algorithms and could detect more faults. Because of its superior performance compared with that of existing algorithms, EMSGA can enable seamless automation of software testing, thereby facilitating the development of different software packages.
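EMSGA's additional chromosome-selection processes are defined in the paper itself; as a generic illustration of what such a step can look like, the sketch below combines elitism with tournament selection over toy bit-string chromosomes whose fitness counts covered branches. All names and parameters here are assumptions, not EvoSuite's or EMSGA's actual interfaces.

```python
import random

def select_best(population, fitness, elite_frac=0.2, tournament_k=3, rng=random):
    """Generic elitist selection sketch: carry the top fraction of
    chromosomes forward unchanged, fill the rest by tournament selection."""
    ranked = sorted(population, key=fitness, reverse=True)
    n_elite = max(1, int(elite_frac * len(population)))
    selected = ranked[:n_elite]
    while len(selected) < len(population):
        contenders = rng.sample(population, tournament_k)
        selected.append(max(contenders, key=fitness))
    return selected

# Toy example: chromosomes are bit-strings; fitness = branches covered.
pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(8)]
next_gen = select_best(pop, fitness=sum)
print(next_gen)
```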


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2450
Author(s):  
Fahd Alharithi ◽  
Ahmed Almulihi ◽  
Sami Bourouis ◽  
Roobaea Alroobaea ◽  
Nizar Bouguila

In this paper, we propose a novel hybrid discriminative learning approach based on shifted-scaled Dirichlet mixture model (SSDMM) and Support Vector Machines (SVMs) to address some challenging problems of medical data categorization and recognition. The main goal is to capture accurately the intrinsic nature of biomedical images by considering the desirable properties of both generative and discriminative models. To achieve this objective, we propose to derive new data-based SVM kernels generated from the developed mixture model SSDMM. The proposed approach includes the following steps: the extraction of robust local descriptors, the learning of the developed mixture model via the expectation–maximization (EM) algorithm, and finally the building of three SVM kernels for data categorization and classification. The potential of the implemented framework is illustrated through two challenging problems that concern the categorization of retinal images into normal or diabetic cases and the recognition of lung diseases in chest X-rays (CXR) images. The obtained results demonstrate the merits of our hybrid approach as compared to other methods.
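A hedged sketch of the generative-discriminative pipeline described above: a mixture model is fitted by EM, each sample is re-represented by its posterior responsibilities, and an SVM is trained on that representation. scikit-learn's GaussianMixture stands in for the paper's shifted-scaled Dirichlet mixture, and the linear kernel over responsibilities is just one simple data-based kernel choice, not necessarily one of the paper's three kernels.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in descriptors for two classes (e.g., normal vs. diabetic retina).
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(2, 1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)

# Step 1: fit a mixture model by EM (GaussianMixture as a stand-in for
# the shifted-scaled Dirichlet mixture of the paper).
gmm = GaussianMixture(n_components=4, random_state=0).fit(X)

# Step 2: generative embedding - represent each sample by its posterior
# responsibilities under the fitted mixture.
Z = gmm.predict_proba(X)

# Step 3: train an SVM on the mixture-based representation.
clf = SVC(kernel="linear").fit(Z, y)
print("train accuracy:", clf.score(Z, y))
```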

