Software Evolution
Recently Published Documents


TOTAL DOCUMENTS

925
(FIVE YEARS 90)

H-INDEX

39
(FIVE YEARS 2)

2022 ◽  
pp. 84-106
Author(s):  
Munish Saini ◽  
Kuljit Kaur Chahal

Many studies have been conducted to understand the evolution process of Open Source Software (OSS). Researchers have used various techniques to understand the OSS evolution process from different perspectives. This chapter reports a meta-data analysis of the systematic literature review on the topic in order to understand its current state and to identify opportunities for the future. The research identified 190 studies, selected against a set of questions, for discussion, and groups them into nine categories. Based on the results of the systematic review, there is evidence of a shift in the metrics and methods for OSS evolution analysis over time. The results also suggest that there is no uniform approach to analyzing and interpreting the findings. More empirical work using a standard set of techniques and attributes is needed to verify the phenomena governing OSS projects. This will help to advance the field and establish a theory of software evolution.


2021 ◽  
Vol 27 (1) ◽  
Author(s):  
Daniel R.F. Apolinário ◽  
Breno B.N. de França

The microservice architecture is claimed to satisfy ongoing software development demands, such as resilience, flexibility, and velocity. However, developing applications based on microservices also brings drawbacks, such as increased operational complexity. Recent studies have also pointed out the lack of methods to prevent problems related to the maintainability of these solutions. Disregarding established design principles during software evolution may lead to so-called architectural erosion, which can ultimately make maintenance unfeasible. As microservices are a relatively new architectural style, there are few initiatives for monitoring the evolution of microservice-based architectures. In this paper, we introduce the SYMBIOTE method for monitoring the coupling evolution of microservice-based systems. More specifically, this method collects coupling metrics at runtime (in staging or production environments) and monitors them throughout software evolution. A longitudinal analysis of the collected measures allows detecting an upward trend in coupling metrics that could signal architectural degradation. To develop the proposed method, we performed an experimental analysis of the behavior of the coupling metrics using artificially generated data. The results of this experiment revealed the metrics' behavior in different scenarios, providing insights for developing the analysis method that identifies architectural degradation. We evaluated the SYMBIOTE method on a real-world open-source project, Spinnaker. The results show a relationship between architectural changes and upward trends in coupling metrics for most of the analyzed release intervals. Therefore, the first version of SYMBIOTE has shown potential to detect signs of architectural degradation during the evolution of microservice-based architectures.
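The abstract does not describe how an upward trend is detected; as a minimal sketch of the general idea (not SYMBIOTE's actual analysis), one could fit a least-squares slope to a coupling metric's per-release values and flag sustained growth. The threshold and decision rule below are assumptions for illustration only.

```python
# Minimal sketch (not the SYMBIOTE implementation): flag a coupling metric
# whose per-release values show a sustained upward trend, which the paper
# treats as a possible sign of architectural degradation.
from typing import Sequence


def slope(values: Sequence[float]) -> float:
    """Least-squares slope of the metric over release index 0..n-1."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den if den else 0.0


def shows_upward_trend(metric_per_release: Sequence[float],
                       threshold: float = 0.0) -> bool:
    """Hypothetical decision rule: positive slope above a chosen threshold."""
    return len(metric_per_release) >= 2 and slope(metric_per_release) > threshold


# Illustrative usage with placeholder values (not data from the study):
# shows_upward_trend([3.1, 3.4, 3.9, 4.6])  -> True
```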


2021 ◽  
Vol 11 (19) ◽  
pp. 9286
Author(s):  
Seonah Lee ◽  
Jaejun Lee ◽  
Sungwon Kang ◽  
Jongsun Ahn ◽  
Heetae Cho

When performing software evolution tasks, developers spend a significant amount of time looking for files to modify. By recommending files to modify, a code edit recommendation system reduces the developer's navigation time during such tasks. In this paper, we propose a code edit recommendation method using a recurrent neural network (CERNN). CERNN forms contexts that maintain the sequence of developers' interactions to recommend files to edit, and it stops recommending once the first recommendation becomes incorrect for the given evolution task. We evaluated our method by comparing it with the state-of-the-art method MI-EA, which is based on association rule mining. The results show that our proposed method improves the average recommendation accuracy by approximately 5% over MI-EA (0.64 vs. 0.59 F-score).
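The abstract specifies a recurrent network over sequences of developer interactions but gives no architectural details; the following is a minimal sketch of that general setup (hypothetical embedding size, hidden size, and vocabulary of file identifiers, not the authors' CERNN model).

```python
# Minimal sketch, not the actual CERNN model: an LSTM over a developer's
# file-interaction sequence that scores candidate files to edit next.
import torch
import torch.nn as nn


class EditRecommender(nn.Module):
    def __init__(self, num_files: int, embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(num_files, embed_dim)  # one id per project file
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim, num_files)    # score every file as the next edit

    def forward(self, interaction_ids: torch.Tensor) -> torch.Tensor:
        # interaction_ids: (batch, sequence of viewed/edited file ids)
        x = self.embed(interaction_ids)
        _, (h, _) = self.rnn(x)
        return self.score(h[-1])                          # (batch, num_files) logits


# Usage: recommend the top-k files given the current interaction context.
# model = EditRecommender(num_files=5000)
# logits = model(torch.tensor([[12, 87, 87, 431]]))
# top_k = logits.topk(3).indices
```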


2021 ◽  
Vol 26 (6) ◽  
Author(s):  
Christoph Laaber ◽  
Harald C. Gall ◽  
Philipp Leitner

Regression testing comprises techniques that are applied during software evolution to uncover faults effectively and efficiently. While regression testing is widely studied for functional tests, performance regression testing, e.g., with software microbenchmarks, is hardly investigated. Applying test case prioritization (TCP), a regression testing technique, to software microbenchmarks may help capture large performance regressions sooner in new versions. This may be especially beneficial for microbenchmark suites, because they take considerably longer to execute than unit test suites. However, it is unclear whether traditional unit-testing TCP techniques work equally well for software microbenchmarks. In this paper, we empirically study coverage-based TCP techniques, employing total and additional greedy strategies, applied to software microbenchmarks along multiple parameterization dimensions, leading to 54 unique technique instantiations. We find that TCP techniques have a mean APFD-P (average percentage of fault detection on performance) effectiveness between 0.54 and 0.71 and are able to capture the three largest performance changes after executing 29% to 66% of the whole microbenchmark suite. Our efficiency analysis reveals that the runtime overhead of TCP varies considerably depending on the exact parameterization. The most effective technique has an overhead of 11% of the total microbenchmark suite execution time, making TCP a viable option for performance regression testing. The results also demonstrate that the total strategy is superior to the additional strategy. Finally, dynamic-coverage techniques should be favored over static-coverage techniques due to their acceptable analysis overhead; however, in settings where the time for prioritization is limited, static-coverage techniques provide an attractive alternative.
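For readers unfamiliar with the two greedy strategies mentioned, the sketch below shows generic coverage-based total and additional prioritization; it assumes per-benchmark coverage sets and is not the authors' tooling or parameterization.

```python
# Minimal sketch of the two classic greedy TCP strategies the study parameterizes
# (generic coverage-based prioritization, not the paper's exact implementation).
from typing import Dict, List, Set


def total_greedy(coverage: Dict[str, Set[str]]) -> List[str]:
    """Order benchmarks by total number of covered units, descending."""
    return sorted(coverage, key=lambda b: len(coverage[b]), reverse=True)


def additional_greedy(coverage: Dict[str, Set[str]]) -> List[str]:
    """Repeatedly pick the benchmark covering the most not-yet-covered units."""
    remaining = dict(coverage)
    covered: Set[str] = set()
    order: List[str] = []
    while remaining:
        best = max(remaining, key=lambda b: len(remaining[b] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order


# Illustrative usage with hypothetical coverage sets:
# coverage = {"benchA": {"m1", "m2"}, "benchB": {"m2", "m3", "m4"}, "benchC": {"m1"}}
# total_greedy(coverage)      -> ['benchB', 'benchA', 'benchC']
# additional_greedy(coverage) -> ['benchB', 'benchA', 'benchC']
```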


2021 ◽  
Author(s):  
Willie Lawrence ◽  
Eiji Adachi

During the evolution of a database schema, some schema-changing operations (e.g., the "ALTER TABLE" command) require the underlying database management system to lock tables until the operation is finished. We call these schema-changing operations blocking operations. During the execution of blocking operations, a software application may behave abnormally, ranging from slow page loading to errors caused by web requests taking too long to return. Despite their potential negative impact on important quality attributes, blocking operations have not yet been empirically investigated in the context of software evolution. To fill this gap, we conducted a large industrial case study in the context of a Brazilian software company. We analyzed 1,499 atomic schema-changing operations from a period of 6 years to explore which blocking operations the developers frequently performed during the evolution of the database schema of a target system. The intention behind this case study is to better understand the problem in its original context and to outline strategies to correct or mitigate it in the future. Our results show that blocking operations were very common, though not all of them seemed to cause observable downtime. We also present some mitigating strategies already in use by the development team of the target system to cope with blocking operations during software evolution and avoid their negative impact.
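As an illustration of what classifying blocking operations might look like, the sketch below matches schema-changing statements against a small set of assumed patterns. This is a simplified sketch, not the classification used in the study; which statements actually block depends on the DBMS, its version, and the table size.

```python
# Minimal sketch (assumed, simplified rules): flag schema-changing statements
# that commonly require a long table lock on some DBMS configurations.
# The patterns are illustrative, not the study's classification.
import re

BLOCKING_PATTERNS = [
    r"^\s*ALTER\s+TABLE\b.*\b(ADD|DROP|MODIFY|CHANGE)\s+COLUMN\b",
    r"^\s*ALTER\s+TABLE\b.*\bADD\s+(UNIQUE\s+)?(INDEX|KEY)\b",
]


def is_potentially_blocking(statement: str) -> bool:
    """Return True if the statement matches one of the assumed blocking patterns."""
    return any(re.search(p, statement, re.IGNORECASE) for p in BLOCKING_PATTERNS)


# is_potentially_blocking("ALTER TABLE orders ADD COLUMN status INT")  -> True
# is_potentially_blocking("CREATE TABLE audit_log (id INT)")           -> False
```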


2021 ◽  
Author(s):  
Cezary Boldak ◽  
Stanislaw Jarzabek ◽  
Junling Seow

Software evolution relies on storing component versions along with delta-changes in the repository of a version control tool, such as a centralized CVS in the past or a decentralized Git today. Code implementing various software features (e.g., requirements) often spreads over multiple software components, and across multiple versions of those components. Not having a clear picture of feature implementation and evolution may hinder software reuse, which is most often concerned with feature reuse across system releases; components are just a means to that end. Much research on feature location shows how important and difficult it is to find feature-related code buried in program components post mortem. We propose to avoid creating the problem in the first place by explicating feature-related code in component versions at the time of their implementation. To do that, we complement the traditional version control approach with generative mechanisms. We describe the salient features of such an approach realized in ART (Adaptive Reuse Technology, http://art-processor.org) and explain its role in easing the comprehension of software evolution and feature reuse. Advanced commercial version control tools take a step towards easing the evolution problems addressed in this paper; our approach addresses the same problem on quite different ground.
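As a rough sketch of the underlying idea of explicating feature-related code at implementation time (all names below are hypothetical; this is not ART's actual mechanism), one could record which code regions in which component versions implement each feature, so that feature location need not be reconstructed post mortem.

```python
# Hypothetical sketch: a map from features to code regions across component versions,
# populated when the code is written rather than recovered later by feature location.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class FeatureMap:
    # feature name -> list of (component, version, start_line, end_line)
    regions: Dict[str, List[Tuple[str, str, int, int]]] = field(default_factory=dict)

    def mark(self, feature: str, component: str, version: str,
             start: int, end: int) -> None:
        """Record a code region as implementing a feature in a given version."""
        self.regions.setdefault(feature, []).append((component, version, start, end))

    def locate(self, feature: str) -> List[Tuple[str, str, int, int]]:
        """Return every recorded region for the feature across all versions."""
        return self.regions.get(feature, [])


# fm = FeatureMap()
# fm.mark("export-to-pdf", "report.c", "v2.3", 120, 188)
# fm.locate("export-to-pdf")
```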


2021 ◽  
Vol 2024 (1) ◽  
pp. 012067
Author(s):  
YanXia Zhu ◽  
LinHui Zhong ◽  
LiJuan Fu ◽  
ShuHe Ruan ◽  
Jing Xu ◽  
...  

2021 ◽  
pp. 226-239
Author(s):  
Anna Grimán Padua ◽  
Manuel Capel Tuñón ◽  
Eladio Garví
