An exploratory study of the state of practice of performance testing in Java-based open source projects

2016 ◽  
Author(s):  
Philipp Leitner ◽  
Cor-Paul Bezemer

The usage of open source (OS) software is nowadays widespread across many industries and domains. While the functional quality of OS projects is considered to be up to par with that of closed-source software, much is unknown about their quality in terms of non-functional attributes, such as performance. One challenge for OS developers is that, unlike for functional testing, there is a lack of accepted best practices for performance testing. To reveal the state of practice of performance testing in OS projects, we conduct an exploratory study on 111 Java-based OS projects from GitHub. We study the performance tests of these projects from five perspectives: (1) the developers, (2) the size, (3) the organization, (4) the types of performance tests, and (5) the tooling used for performance testing. First, in a quantitative study, we show that writing performance tests is not a popular task in OS projects: performance tests form only a small portion of the test suite, are rarely updated, and are usually maintained by a small group of core project developers. Second, we show through a qualitative study that even though many projects are aware that they need performance tests, developers appear to struggle to implement them. We argue that future performance testing frameworks should provide better support for low-friction testing, for instance via non-parameterized methods or performance test generation, as well as focus on a tight integration with standard continuous integration tooling.
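
As a concrete illustration of the kind of low-friction, annotation-based performance test the abstract alludes to, below is a minimal sketch assuming a project that uses JMH, the OpenJDK microbenchmark harness commonly found in Java projects; the benchmark class and workload are hypothetical and not taken from any of the studied projects.

    // Minimal JMH microbenchmark sketch (hypothetical workload, not from the study).
    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.OutputTimeUnit;

    public class StringBuildBenchmark {

        @Benchmark
        @BenchmarkMode(Mode.AverageTime)
        @OutputTimeUnit(TimeUnit.MICROSECONDS)
        public String buildShortString() {
            // Illustrative workload: build a short string in a loop.
            // Returning the result lets JMH consume it and avoid dead-code elimination.
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 100; i++) {
                sb.append(i);
            }
            return sb.toString();
        }
    }

A test of this shape can be run from the command line or a build-tool plugin, which is one way it could be wired into standard continuous integration tooling, as the abstract recommends.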


2019 ◽  
Vol 2 (3) ◽  
pp. 28
Author(s):  
Elena Markoska ◽  
Aslak Johansen ◽  
Mikkel Baun Kjærgaard ◽  
Sanja Lazarova-Molnar ◽  
Muhyiddine Jradi ◽  
...  

Performance testing of components and subsystems of buildings is a promising practice for increasing energy efficiency and closing gaps between the intended and actual performance of buildings. A typical shortcoming of performance testing is the difficulty of linking a failing test to a faulty or underperforming component. Furthermore, a failing test can also be caused by a wrongly configured performance test. In this paper, we present Building Metadata Performance Testing (BuMPeT), a method that addresses this shortcoming by using building metadata models to extend performance testing with fault detection and diagnostics (FDD) capabilities. We present four different procedures that apply BuMPeT to different data sources and components. We have applied the proposed method to a case study building located in Denmark to assess its capabilities and benefits. Additionally, we use two real-case scenarios to showcase examples of failing performance tests in the building, as well as the discovery of the causes of underperformance. Finally, to examine the limits of the benefits of the applied procedure, we present a detailed elaboration of a hypothetical scenario. Our findings demonstrate that the method has potential and can serve to increase the energy efficiency of a wide range of buildings.
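
To make the linking idea more tangible, here is a toy sketch, not taken from the paper, of how a failing performance check on a measurement point could be traced to a suspect component through a simple metadata lookup; the point identifiers, component names, and thresholds are invented, and BuMPeT's actual procedures and metadata models are richer than this.

    // Toy illustration: map a failing check on a measurement point to a component.
    import java.util.Map;

    public class MetadataLinkedCheck {

        // Hypothetical metadata model: measurement point id -> serviced component.
        private static final Map<String, String> POINT_TO_COMPONENT = Map.of(
                "room1.temp.sensor", "AHU-01 heating coil",
                "room1.co2.sensor", "AHU-01 ventilation damper");

        // A simple performance test: the measured value must stay within [min, max].
        static void check(String pointId, double measured, double min, double max) {
            if (measured < min || measured > max) {
                String component = POINT_TO_COMPONENT.getOrDefault(pointId, "unknown component");
                System.out.printf("FAIL %s: %.1f outside [%.1f, %.1f]; suspect: %s%n",
                        pointId, measured, min, max, component);
            } else {
                System.out.printf("PASS %s: %.1f%n", pointId, measured);
            }
        }

        public static void main(String[] args) {
            check("room1.temp.sensor", 26.4, 20.0, 24.0); // fails; points at the heating coil
            check("room1.co2.sensor", 520.0, 0.0, 900.0); // passes
        }
    }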


2020 ◽  
Vol 19 (1) ◽  
pp. 5-13 ◽  
Author(s):  
Antonio Bucchiarone ◽  
Jordi Cabot ◽  
Richard F. Paige ◽  
Alfonso Pierantonio

In 2017 and 2018, two events were held (in Marburg, Germany, and San Vigilio di Marebbe, Italy, respectively) focusing on an analysis of the state of research, state of practice, and state of the art in model-driven engineering (MDE). The events brought together experts from industry, academia, and the open-source community to assess what has changed in MDE research over the last 10 years, what challenges remain, and what new challenges have arisen. This article reports on the results of those meetings and presents a set of grand challenges that emerged from the discussions and their synthesis. These challenges could lead to research initiatives for the community going forward.


1967 ◽  
Vol 24 (5) ◽  
pp. 1117-1153 ◽  
Author(s):  
R. A. Bams

Two methods of performance testing were developed to measure differences in stamina among four groups of sockeye migrant fry, all of the Lakelse Lake (Skeena River, B.C.) stock. The four groups differed only in their methods of incubation: one group was naturally propagated, the other three artificially. The results of the swimming performance tests and the vulnerability-to-predation tests agree closely, and analysis shows that the key factor responsible for differences in performance is the size of the fish. Ranked in decreasing order of performance, the four groups rate as follows: naturally propagated fish, fish incubated in gravel from the time of hatching, fish incubated in gravel only for the last few weeks as premigrants, and fish that spent their entire incubation period in hatchery baskets without gravel. Independent of size is the influence of the condition (K-factor) of the fish, with optimum performance occurring at the time of almost complete yolk absorption. Of the two methods, the swimming performance test was found to be more sensitive and is recommended as a tool for comparative "quality testing" of fish stocks.


Author(s):  
Douglas H. Harris

This study examined the effect of subdividing grading units on performance test reliability. That is, instead of increasing test length by adding grading units comparable to existing grading units, this experimental approach attempted to increase test length, and hence reliability, by subdividing existing grading units into comparable subunits. The effect of subdividing grading units was assessed empirically using a performance test of the ultrasonic detection of cracks in pipe welds. Five-hour performance tests involving the examination of 10 pipe-weld specimens were completed by each of 52 experienced ultrasonic operators as part of their qualification for performing tasks of this type in nuclear power plants. Subdivision of grading units was found to increase the reliability of the test from 0.28 to 0.92, to decrease the standard error of measurement of the test from 13.81 to 1.35, and to decrease the 90% confidence band around test scores from ±22.60 to ±2.20. Moreover, the increased reliability was predicted by the Spearman-Brown Prophecy Formula, the method commonly employed for predicting the effect of increased length on test reliability.
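
For reference, the Spearman-Brown prophecy formula mentioned above predicts the reliability \(\rho_k\) of a test lengthened by a factor \(k\) from the reliability \(\rho\) of the original test; the lengthening factor worked out below is back-calculated from the reported reliabilities as an illustration and is not stated in the abstract.

    \[ \rho_k = \frac{k\,\rho}{1 + (k-1)\,\rho} \]

    Solving for \(k\) with the reported values (\(\rho = 0.28\), \(\rho_k = 0.92\)):

    \[ k = \frac{\rho_k\,(1-\rho)}{\rho\,(1-\rho_k)} = \frac{0.92 \times 0.72}{0.28 \times 0.08} \approx 29.6 \]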

