Crowdsourced Web Application Testing Under Real-Time Constraints

Author(s):  
Shikai Guo ◽  
Rong Chen ◽  
Hui Li ◽  
Jian Gao ◽  
Yaqing Liu

Crowdsourcing carried out by cyber citizens instead of hired consultants and professionals has increasingly become an appealing solution for testing the feature-rich and interactive web. Despite the availability of various online crowdsourced testing services, the benefits of exposure to a wider audience and of harnessing the collective efforts of individuals remain uncertain, especially because quality control is problematic in an open environment. The objective of this paper is to propose a real-time collaborative testing approach (RCTA) to make crowdsourced testing productive on a dynamic Internet. We implemented a prototype crowdsourcing system, XTurk, and carried out a case study to understand crowdsourced testers' behavior, their trustworthiness, the execution time of test cases, and the accuracy of feedback. Several experiments were carried out, and the experimental results validate the quality, efficiency, and reliability of the present approach; the positive testing feedback is shown to outperform previous methods.
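
The abstract does not spell out how XTurk scores testers or aggregates their feedback. As a purely illustrative sketch of the quality-control idea it describes, the Python below weights each tester's pass/fail verdict by an estimated trustworthiness score and updates that score against the consensus; the function names, data shapes, and update rule are assumptions, not the paper's RCTA mechanism.

    # Minimal sketch of trust-weighted feedback aggregation for crowdsourced
    # testing. All names and the trust-update rule are illustrative
    # assumptions, not the RCTA/XTurk implementation from the paper.
    from collections import defaultdict

    def aggregate_verdicts(reports, trust):
        """Combine pass/fail reports per test case, weighting by tester trust.

        reports: iterable of (tester_id, test_case_id, verdict), verdict in
                 {"pass", "fail"}.
        trust:   dict mapping tester_id -> weight in (0, 1].
        Returns a dict test_case_id -> consensus verdict.
        """
        scores = defaultdict(float)
        for tester, case, verdict in reports:
            w = trust.get(tester, 0.5)  # unknown testers get a neutral weight
            scores[case] += w if verdict == "fail" else -w
        return {case: ("fail" if s > 0 else "pass") for case, s in scores.items()}

    def update_trust(trust, tester, agreed, lr=0.1):
        """Nudge trust toward 1 when a tester agrees with the consensus,
        toward 0 otherwise (a simple exponential update, assumed here)."""
        target = 1.0 if agreed else 0.0
        trust[tester] = (1 - lr) * trust.get(tester, 0.5) + lr * target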

Author(s):  
Giorgio Metta ◽  
Lorenzo Natale ◽  
Shashank Pathak ◽  
Luca Pulina ◽  
Armando Tacchella

2018 ◽  
Vol 7 (2.28) ◽  
pp. 332 ◽  
Author(s):  
Lei Xiao ◽  
Huaikou Miao ◽  
Ying Zhong

Regression testing is a very important activity in continuous integration (CI) development environments. Software engineers frequently integrate new or changed code, and each integration triggers a new round of regression testing. Furthermore, regression testing in CI environments comes with tight time constraints, making it impossible to re-run all the test cases. Test case prioritization and selection techniques are often used to render CI processes more cost-effective. Based on multi-objective optimization, we present a test case prioritization and selection technique, TCPSCI, that satisfies time constraints and achieves testing goals in CI development environments. We order and select test cases based on historical failure data, code coverage, and test execution time. Within the same change request, test cases that maximize code coverage, have shorter execution times, and reveal the most recent faults receive higher priority. The case study results show that TCPSCI is more cost-effective than manual prioritization.
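
The abstract names three prioritization signals (historical failure data, code coverage, and execution time) and a CI time budget, but not how they are combined. The sketch below assumes a simple weighted-sum score followed by a greedy budget fill; the weights and field names are illustrative, not the published TCPSCI formulation.

    # Illustrative sketch: prioritize and select test cases under a CI time
    # budget from the three signals the abstract names. The weighted-sum
    # scoring and field names are assumptions, not the TCPSCI technique.

    def prioritize_and_select(tests, time_budget, w_fail=0.5, w_cov=0.3, w_time=0.2):
        """tests: list of dicts with keys 'id', 'recent_failures',
        'coverage' (0..1 fraction of changed code covered), and
        'exec_time' in seconds. Returns the ordered subset that fits
        within time_budget seconds."""
        max_fail = max((t["recent_failures"] for t in tests), default=1) or 1
        max_time = max((t["exec_time"] for t in tests), default=1) or 1

        def score(t):
            # Higher failure history and coverage raise priority;
            # longer runtime lowers it.
            return (w_fail * t["recent_failures"] / max_fail
                    + w_cov * t["coverage"]
                    - w_time * t["exec_time"] / max_time)

        ordered = sorted(tests, key=score, reverse=True)
        selected, spent = [], 0.0
        for t in ordered:  # greedily fill the CI time budget
            if spent + t["exec_time"] <= time_budget:
                selected.append(t)
                spent += t["exec_time"]
        return selected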


2021 ◽  
Author(s):  
Yasir Shoaib

Managing applications on the cloud requires extensive decision making on the part of the Application Provider (AP). When an application faces a changing workload, its services are scaled up or down in response. The services run on Virtual Machine (VM) or container instances. APs decide how the application scales through VM provisioning and the placement of services on the VMs. Various drivers guide this decision making; application performance and cost are two such drivers. This thesis answers the question of how APs can meet the performance constraints of their applications while minimizing the cost of the running VMs. Two versions of the problem are presented. The first version meets mean response time constraints for a given deployment configuration through the replication of VMs and the addition of virtual processors; the presented solution is based on layered bottlenecks. A case study shows that the solution meets response time constraints and uses fewer resources than a simple utilization-based approach. The second version adds the minimization of cost as an objective, where VM types with different cost rates are used. This version does not require a deployment configuration and provides a complete solution in which resources can be added and removed. A novel solution based on the layered bottleneck strength value combined with a genetic algorithm is presented. For the case study, a decision maker is implemented for a web application, and the proposed solution is compared with three algorithms, all of which run within the decision maker. The results show that the proposed solution runs faster than exhaustive search and meets response time constraints with near-optimal cost. It also achieves better cost than a plain genetic algorithm and random search, at the expense of a slightly longer runtime.
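
To make the decision problem concrete, here is a toy sketch that searches VM counts per service tier for the cheapest configuration meeting a mean response time limit. It uses a naive single-queue response-time approximation and exhaustive search; the thesis's layered-bottleneck analysis and genetic algorithm are not reproduced, and all names and formulas below are assumptions.

    # Toy sketch of the decision the thesis studies: pick VM counts per
    # service tier that meet a mean response time constraint at minimum
    # cost. Uses a crude R = S / (1 - utilization) estimate and brute
    # force, NOT the layered-bottleneck genetic algorithm of the thesis.
    from itertools import product

    def mean_response_time(arrival_rate, service_time, vms):
        """Crude open-queue estimate: load is spread evenly over the VMs."""
        util = arrival_rate * service_time / vms
        if util >= 1.0:
            return float("inf")  # saturated tier: constraint unmeetable
        return service_time / (1.0 - util)

    def cheapest_config(services, arrival_rate, rt_limit, cost_per_vm, max_vms=10):
        """services: list of per-request service times (s), one per tier.
        Returns (vm_counts, cost) of the cheapest configuration whose
        summed tier response times stay within rt_limit, else (None, inf)."""
        best, best_cost = None, float("inf")
        for counts in product(range(1, max_vms + 1), repeat=len(services)):
            rt = sum(mean_response_time(arrival_rate, s, n)
                     for s, n in zip(services, counts))
            cost = cost_per_vm * sum(counts)
            if rt <= rt_limit and cost < best_cost:
                best, best_cost = counts, cost
        return best, best_cost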




2020 ◽  
Vol 66 (8) ◽  
pp. 1072-1083 ◽  
Author(s):  
Andreas Bietenbeck ◽  
Mark A Cervinski ◽  
Alex Katayev ◽  
Tze Ping Loh ◽  
Huub H van Rossum ◽  
...  

Abstract

Background: Patient-based real-time quality control (PBRTQC) avoids limitations of traditional quality control methods based on the measurement of stabilized control samples. However, PBRTQC needs to be adapted to individual laboratories through parameters such as algorithm, truncation, block size, and control limit.

Methods: In a computer simulation, biases were added to real patient results for 10 analytes with diverse properties. Different PBRTQC methods were assessed on their ability to detect these biases early.

Results: The simulation, based on 460 000 historical patient measurements for each analyte, yielded several recommendations for PBRTQC. Control limit calculation with "percentiles of daily extremes" led to effective limits and allowed specification of the percentage of days with false alarms. However, changes in measurement distribution easily increased false alarms. Box–Cox but not logarithmic transformation improved error detection. Winsorization of outlying values often performed better than simple outlier removal. For medians and Harrell–Davis 50th percentile estimators (HD50s), no truncation was necessary. Block size influenced medians substantially and HD50s to a lesser extent. Conversely, a change of truncation limits affected means and exponentially weighted moving averages more than a change of block sizes. A large spread of patient measurements impeded error detection. PBRTQC methods were not always able to detect an allowable bias within the simulated 1000 erroneous measurements. A web application was developed to estimate PBRTQC performance.

Conclusions: Computer simulations can optimize PBRTQC, but some parameters are generally superior and can be taken as defaults.
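
To illustrate the moving-average flavor of PBRTQC discussed above, the sketch below runs an exponentially weighted moving average over winsorized patient results and flags values outside fixed control limits. The parameter values are placeholders, and in practice the control limits would be derived from historical data (e.g., via the percentiles-of-daily-extremes method the paper recommends).

    # Minimal sketch of a patient-based real-time QC check: an exponentially
    # weighted moving average over winsorized patient results, flagged
    # against fixed control limits. Parameter values and the limit
    # derivation are simplified assumptions, not the paper's settings.

    def winsorize(x, low, high):
        """Clamp outlying values instead of discarding them."""
        return min(max(x, low), high)

    def pbrtqc_stream(results, lam=0.05, low=50.0, high=150.0,
                      lcl=95.0, ucl=105.0, start=100.0):
        """Yield (ewma, alarm) for each incoming patient result.

        lam:      EWMA smoothing factor (analogous to block size).
        low/high: winsorization (truncation) limits.
        lcl/ucl:  lower/upper control limits, e.g. set from percentiles
                  of daily extremes on historical data.
        """
        ewma = start
        for x in results:
            ewma = (1 - lam) * ewma + lam * winsorize(x, low, high)
            yield ewma, not (lcl <= ewma <= ucl)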

