Identification of patterns in the involvement of novice software developers in software testing processes

Author(s):  
Gustavo Caiza ◽  
Fernando Ibarra Torres ◽  
Marcelo V. Garcia


2011 ◽  
Vol 47 ◽  
Author(s):  
Stefan Gruner ◽  
Johan Van Zyl

Small software companies make up the majority of software companies around the world, but their software development processes are often not as clearly defined and structured as those of their larger counterparts. The test process in particular is often the most neglected part of the software process. This contribution analyses the software testing process in a small South African IT company, here called X, to determine the problems that currently cause it to deliver software fraught with too many defects. The findings of a survey conducted with all software developers in company X are discussed, and several typical problems are identified. We also discuss two prevalent test process improvement models that can be used to reason about the possibilities of process improvement. Solutions to those (or similar) problems often already exist, but a major part of the problem addressed in this contribution is the unawareness, or unfamiliarity, of many small industrial software developers and IT managers with the scientific literature on software science and engineering, and in our case especially with software testing.


2012 ◽  
Vol 198-199 ◽  
pp. 572-576
Author(s):  
Li Zhao ◽  
Ke Yong Li

Software developers have developed many effective programming methods; however, software errors are still inevitable in software products. Estimating the number of errors during software testing is therefore important, as it is a key parameter in calculating the MTTF. Because the number of errors in a program directly determines the reliability of the software, these errors must be eliminated in order to improve the quality and reliability of software products. To do this, we must understand the types of errors and also know how many errors the software contains. Three methods of estimating errors are presented in this paper.
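
For readers who want a concrete point of reference, one widely known estimator of residual errors is the error-seeding (Mills) method. The sketch below is purely illustrative and is not necessarily one of the three methods listed in the paper; the example numbers are invented.

```python
def seeding_estimate(seeded_total, seeded_found, indigenous_found):
    """Error-seeding estimate of the total number of indigenous (real)
    errors in a program.

    seeded_total     -- artificial errors deliberately planted before testing
    seeded_found     -- planted errors rediscovered during testing
    indigenous_found -- real (non-planted) errors discovered during testing
    """
    if seeded_found == 0:
        raise ValueError("No seeded errors found; the estimate is undefined.")
    # Assume the detection ratio observed for seeded errors also holds for
    # the real errors, so total_real ~= found_real / detection_ratio.
    return indigenous_found * seeded_total / seeded_found

# Hypothetical example: 20 errors seeded, 15 of them rediscovered, and
# 30 real errors found => an estimated 40 real errors in total,
# i.e. roughly 10 still undetected.
print(seeding_estimate(20, 15, 30))  # 40.0
```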


2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
Ali M. Alakeel

Software testing is a very labor-intensive and costly task. Therefore, many techniques for automating the software testing process have been reported in the literature. Assertion-Based automated software testing has been shown to be effective in detecting program faults compared to traditional black-box and white-box software testing methods. However, applying this approach in the presence of large numbers of assertions may be very costly. Software developers therefore need assistance when deciding whether to apply Assertion-Based testing, so that they can obtain the benefits of this approach at an acceptable cost. In this paper, we present an Assertion-Based testing metrics technique that is based on fuzzy logic. The main goal of the proposed technique is to enhance the performance of Assertion-Based software testing in the presence of large numbers of assertions. To evaluate the proposed technique, an experimental study was performed in which it was applied to programs with assertions. The results of this experiment show that the effectiveness and performance of Assertion-Based software testing improve when the proposed testing metrics technique is applied.
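
To make the general idea tangible, the sketch below shows how a simple fuzzy score might be used to rank assertions when evaluating all of them is too costly. The membership functions, weights, and profile data are invented for illustration; they are not the metric proposed in the paper.

```python
def fuzzy_high(value, low, high):
    """Piecewise-linear membership: degree (0..1) to which `value` is 'high'."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def assertion_score(violation_rate, eval_cost_ms):
    """Fuzzy AND (min) of 'likely to reveal a fault' and 'cheap to evaluate'."""
    likely_faulty = fuzzy_high(violation_rate, 0.0, 0.2)
    cheap = 1.0 - fuzzy_high(eval_cost_ms, 1.0, 50.0)
    return min(likely_faulty, cheap)

# Hypothetical assertion profiles: (name, past violation rate, cost in ms).
assertions = [
    ("pre_nonempty_input", 0.30, 0.5),
    ("post_sort_ordered", 0.15, 2.0),
    ("invariant_balance", 0.02, 40.0),
]

# Enable only the highest-scoring assertions when the full set is too costly.
for name, rate, cost in sorted(assertions,
                               key=lambda a: assertion_score(a[1], a[2]),
                               reverse=True):
    print(f"{name}: score = {assertion_score(rate, cost):.2f}")
```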


2022 ◽  
Vol 31 (1) ◽  
pp. 1-74
Author(s):  
Owain Parry ◽  
Gregory M. Kapfhammer ◽  
Michael Hilton ◽  
Phil McMinn

Tests that fail inconsistently, without changes to the code under test, are described as flaky. Flaky tests do not give a clear indication of the presence of software bugs and thus limit the reliability of the test suites that contain them. A recent survey of software developers found that 59% claimed to deal with flaky tests on a monthly, weekly, or daily basis. As well as being detrimental to developers, flaky tests have also been shown to limit the applicability of useful techniques in software testing research. In general, one can think of flaky tests as being a threat to the validity of any methodology that assumes the outcome of a test only depends on the source code it covers. In this article, we systematically survey the body of literature relevant to flaky test research, amounting to 76 papers. We split our analysis into four parts: addressing the causes of flaky tests, their costs and consequences, detection strategies, and approaches for their mitigation and repair. Our findings and their implications have consequences for how the software-testing community deals with test flakiness, pertinent to practitioners and of interest to those wanting to familiarize themselves with the research area.
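
For readers new to the area, the most basic detection strategy discussed in this literature is simply rerunning a test without changing the code and checking whether its outcome varies. The minimal sketch below illustrates that idea only; it does not reproduce any specific tool surveyed in the article, and the non-deterministic test is invented for demonstration.

```python
import random

def run_test(test_fn):
    """Return True if the test passes, False if it fails with AssertionError."""
    try:
        test_fn()
        return True
    except AssertionError:
        return False

def is_flaky(test_fn, reruns=10):
    """Flag a test as flaky if repeated runs produce both a pass and a fail."""
    outcomes = {run_test(test_fn) for _ in range(reruns)}
    return len(outcomes) > 1

# Hypothetical non-deterministic test, used purely for illustration.
def test_sometimes_fails():
    assert random.random() < 0.9  # fails roughly 10% of the time

print(is_flaky(test_sometimes_fails, reruns=20))  # very likely True
```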


Author(s):  
Jonathan Jacky ◽  
Margus Veanes ◽  
Colin Campbell ◽  
Wolfram Schulte


Author(s):  
Rupali A. Mahajan

The aim of this qualitative study was to investigate and understand the conditions that affect software cost, requirement tracking, and scheduled software testing delivered as an online service, and to motivate essential research questions. Interviews were conducted with administrators from five organizations. The study used qualitative grounded theory as its research methodology. The results show that the demand for software testing and online requirement monitoring as an online service is on the rise and is influenced by conditions such as the level of domain knowledge required to adequately test an application, flexibility and cost effectiveness as benefits, security and pricing as top requirements, cloud computing as the project monitoring mode, and the need for software testers to sharpen their skills. Suggested potential research areas include, among others, the application domains best suited to online software testing and the pricing and handling of test data. The key issue is to monitor clients' requirements, track those requirements, keep the software free of bugs, and avoid requirement gold plating. This study presents current ideas about online requirement monitoring and software testing.

