Computer-aided software development process design

1989 ◽  
Vol 15 (9) ◽  
pp. 1025-1037 ◽  
Author(s):  
C.Y. Lin ◽  
R.R. Levary

2015 ◽  
Vol 25 (09n10) ◽  
pp. 1747-1752 ◽  
Author(s):  
Guoyuan Liu ◽  
Zhi Li ◽  
Shilang Huang ◽  
Zhaofeng Ouyang ◽  
Zhe Liu

This paper presents a set of computer-aided tools for problem analysis in the software development process. Jackson’s problem diagrams are used to model the problem owners’ needs and the relevant contexts of the software to be built. An algorithm based on three classes of rules is provided for systematically transforming these models into behavioral descriptions of the software. This work is part of our long-term research effort to embed, and empirically evaluate, Jackson’s Problem Frames (PF) framework in requirements engineering practice.
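The rule-based transformation described in the abstract can be pictured with a small, hypothetical sketch. This is not the paper's algorithm; the types, the single stimulus/response rule, and all names below are assumptions chosen only to illustrate the general idea of deriving a behavioral description from a problem diagram's shared phenomena.

```python
# Hypothetical sketch: deriving a stimulus/response behavioral description
# from a problem-diagram-like model. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Interface:
    """A shared phenomenon between the machine and a problem domain."""
    domain: str          # the problem-domain name
    event: str           # the shared event/phenomenon
    controlled_by: str   # "machine" or "domain"

@dataclass
class ProblemDiagram:
    machine: str
    interfaces: list = field(default_factory=list)

def to_behavior(diagram: ProblemDiagram) -> list:
    """One illustrative rule: pair each domain-controlled event (stimulus)
    with the machine-controlled events on the same domain (responses)."""
    behavior = []
    for stim in (i for i in diagram.interfaces if i.controlled_by == "domain"):
        responses = [i.event for i in diagram.interfaces
                     if i.controlled_by == "machine" and i.domain == stim.domain]
        behavior.append((stim.event, responses))
    return behavior

# Example: a patient-monitoring-style diagram
diagram = ProblemDiagram(
    machine="MonitorMachine",
    interfaces=[
        Interface("Patient", "vital_sign_reading", "domain"),
        Interface("Patient", "raise_alarm", "machine"),
    ],
)
print(to_behavior(diagram))  # [('vital_sign_reading', ['raise_alarm'])]
```

A full transformation would apply several such rule classes in sequence; this sketch shows only the general shape of one model-to-behavior rule.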


Author(s):  
Alan W. Brown ◽  
David J. Carney ◽  
Edwin J. Morris ◽  
Dennis B. Smith ◽  
Paul F. Zarrella

Computer Aided Software Engineering (CASE) tools typically support individual users in the automation of a set of tasks within a software development process. Such tools have helped organizations in their efforts to develop better software within budget and time constraints. However, many organizations are failing to take full advantage of CASE technology as they struggle to make coordinated use of collections of tools, often obtained at different times from different vendors. This book provides an in-depth analysis of the CASE tool integration problem, and describes practical approaches that can be used with current CASE technology to help your organization take greater advantage of integrated CASE.


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3480
Author(s):  
Walter Takashi Nakamura ◽  
Iftekhar Ahmed ◽  
David Redmiles ◽  
Edson Oliveira ◽  
David Fernandes ◽  
...  

The success of a software application is related to users’ willingness to keep using it. In this sense, evaluating User eXperience (UX) has become an important part of the software development process. Researchers have carried out studies employing various methods to evaluate the UX of software products. Some studies reported varied and even contradictory results when applying different UX evaluation methods, making it difficult for practitioners to identify which results to rely upon. However, these works did not evaluate developers’ perspectives and their impact on the decision process. Moreover, such studies focused on one-shot evaluations, which cannot assess whether the methods provide the same big picture of the experience (i.e., deteriorating, improving, or stable). This paper presents a longitudinal study in which 68 students evaluated the UX of an online judge system by employing the AttrakDiff, UEQ, and Sentence Completion methods at three points during a semester. The study reveals contrasting results between the methods, which affected developers’ decisions and interpretations. With this work, we intend to draw the HCI community’s attention to the contrast between different UX evaluation methods and the impact of their outcomes on the software development process.
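The longitudinal contrast the abstract describes can be sketched in a few lines. The scores, thresholds, and scale names below are made up; AttrakDiff and UEQ actually use multi-item questionnaire scales, and this sketch only shows how per-method mean scores over three measurement points could be classified as improving, deteriorating, or stable.

```python
# Hypothetical sketch: classifying longitudinal UX trends per method.
# The mean scores at T1, T2, T3 are invented for illustration.
scores = {
    "AttrakDiff": [1.2, 0.9, 0.5],
    "UEQ": [0.8, 1.0, 1.3],
}

def trend(series, eps=0.2):
    """Classify the T1-to-T3 change with a small tolerance band."""
    delta = series[-1] - series[0]
    if delta > eps:
        return "improving"
    if delta < -eps:
        return "deteriorating"
    return "stable"

for method, series in scores.items():
    print(method, trend(series))
# With these invented numbers the two methods report opposite trends
# for the same product, which is the kind of contrast the study highlights.
```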

