DESIGN AND MODELING IN THE SOFTWARE PERFORMANCE ENGINEERING DEVELOPMENT PROCESS

2010 ◽ Vol 19 (01) ◽ pp. 307-323
Author(s): Salvatore Distefano, Antonio Puliafito, Marco Scarpa

Performance-related problems are becoming more and more strategic in software development, especially with the recent advent of Web Services and related business-oriented composition techniques (software as a service, Web 2.0, orchestration, choreography, etc.). In particular, early integration of performance specifications into the software development process (SDP) has been recognized in recent years as an effective way to improve the overall quality of software. The goal of our work is the definition of a software development process that integrates performance evaluation and prediction. In the software performance engineering development process (SPEDP) we specify, performance plays the key role in driving development, thus implementing a performance/QoS-driven (software) development process. More specifically, in this paper our aim is to formally define the SPEDP design process, with particular attention to its foundation, the first step of SPEDP: the design, modeling, and representation of the software/system architecture. We define the diagrams to use and show how to model the structure of the software architecture, its behavior, and its performance requirements. This is the first mandatory step toward automating SPEDP in a specific tool, which we have partially implemented as ArgoPerformance, a performance plug-in for ArgoUML.
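
As a rough illustration of the general idea described in the abstract, attaching performance/QoS requirements to an architectural model and checking them against predicted values, the following minimal Python sketch may help. It is neither the SPEDP notation nor the ArgoPerformance implementation; the component, requirement, and check names below are hypothetical.

# Illustrative sketch only: a toy model of components annotated with
# response-time requirements, checked against predicted values.
# Component, ResponseTimeRequirement, and check_requirements are
# hypothetical names, not taken from SPEDP or ArgoPerformance.

from dataclasses import dataclass, field

@dataclass
class ResponseTimeRequirement:
    operation: str
    max_seconds: float              # required upper bound on response time

@dataclass
class Component:
    name: str
    requirements: list = field(default_factory=list)
    predicted: dict = field(default_factory=dict)   # operation -> predicted time (s)

def check_requirements(component):
    """Compare predicted performance against each stated requirement."""
    violations = []
    for req in component.requirements:
        predicted = component.predicted.get(req.operation)
        if predicted is not None and predicted > req.max_seconds:
            violations.append((req.operation, predicted, req.max_seconds))
    return violations

checkout = Component(
    name="CheckoutService",
    requirements=[ResponseTimeRequirement("placeOrder", max_seconds=0.5)],
    predicted={"placeOrder": 0.72},   # e.g. obtained from a simulation or queueing model
)

for op, got, limit in check_requirements(checkout):
    print(f"{checkout.name}.{op}: predicted {got}s exceeds the {limit}s requirement")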

Sensors ◽ 2021 ◽ Vol 21 (10) ◽ pp. 3480
Author(s): Walter Takashi Nakamura, Iftekhar Ahmed, David Redmiles, Edson Oliveira, David Fernandes, ...

The success of a software application is related to users’ willingness to keep using it. In this sense, evaluating User eXperience (UX) has become an important part of the software development process. Researchers have carried out studies employing various methods to evaluate the UX of software products. Some of these studies reported varied and even contradictory results when applying different UX evaluation methods, making it difficult for practitioners to identify which results to rely upon. However, these works did not evaluate developers’ perspectives or their impact on the decision process. Moreover, such studies focused on one-shot evaluations, which cannot assess whether the methods provide the same big picture of the experience over time (i.e., deteriorating, improving, or stable). This paper presents a longitudinal study in which 68 students evaluated the UX of an online judge system using the AttrakDiff, UEQ, and Sentence Completion methods at three points over a semester. The study reveals contrasting results between the methods, which affected developers’ decisions and interpretations. With this work, we intend to draw the HCI community’s attention to the contrast between different UX evaluation methods and the impact of their outcomes on the software development process.
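
As a rough illustration of the "big picture" notion mentioned in the abstract (whether repeated measurements suggest an improving, deteriorating, or stable experience), the following minimal Python sketch averages one questionnaire scale at three moments and labels the change. The scale, item values, and stability threshold are hypothetical; this is not the official AttrakDiff or UEQ scoring procedure.

# Illustrative sketch only: averaging questionnaire scale scores at three
# measurement points and labeling the overall trend. Scale name, item
# values, and the stability threshold are hypothetical.

from statistics import mean

def scale_mean(item_scores):
    """Mean of a group's item scores for one scale (e.g. on a -3..+3 range)."""
    return mean(item_scores)

def trend(means, threshold=0.3):
    """Classify the change between the first and last measurement."""
    delta = means[-1] - means[0]
    if delta > threshold:
        return "improving"
    if delta < -threshold:
        return "deteriorating"
    return "stable"

# Hypothetical group means for one scale at moments T1, T2, T3
attractiveness = [scale_mean([1.2, 0.8, 1.5]),
                  scale_mean([0.9, 0.7, 1.1]),
                  scale_mean([0.5, 0.4, 0.6])]

print(attractiveness, "->", trend(attractiveness))   # prints "deteriorating" for these values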

