Refactoring Legacy Software for Layer Separation

Author(s):  
Alireza Khalilipour ◽  
Moharram Challenger ◽  
Mehmet Onat ◽  
Hale Gezgen ◽  
Geylani Kardas

One of the main aims of layered software architecture is to divide the code into different layers so that each layer contains related modules and serves the layers above it. Although layered software architecture is now mature, many legacy information systems do not benefit from its advantages: their business-process and data-access code mostly resides in a single layer. Because the code is consolidated in one layer, changes to the software and its maintenance are difficult in many legacy systems. In addition, the large size of a single layer concentrates the load and turns the server into a bottleneck on which all requests must be executed. To eliminate these deficiencies, this paper presents a refactoring mechanism for the automatic separation of the business and data access layers: data-access code is detected based on a series of patterns in the input code and transferred to a new layer. For this purpose, we introduce a code scanner which detects the target points based on these patterns and automatically makes the changes required for the layered architecture. According to the experimental evaluation results, the performance of the layer-separated software is increased using the proposed approach. Furthermore, the application of the proposed approach provides additional benefits with respect to qualitative criteria such as loose coupling and tight cohesion.
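
The kind of pattern-based detection described in this abstract can be pictured with a minimal sketch. The following Python snippet is purely illustrative and is not the authors' tool: the regular expressions, function names, and sample input are assumptions chosen to show how statements containing data-access code might be flagged for relocation to a separate layer.

```python
import re

# Hypothetical patterns for spotting data-access code; the authors' actual
# rule set and target language are not specified here.
DATA_ACCESS_PATTERNS = [
    re.compile(r"\b(SELECT|INSERT|UPDATE|DELETE)\b\s", re.IGNORECASE),
    re.compile(r"\b(ExecuteReader|ExecuteNonQuery|SqlCommand)\b"),
]

def scan_for_data_access(source_lines):
    """Return (line_number, line) pairs that look like data-access code."""
    hits = []
    for number, line in enumerate(source_lines, start=1):
        if any(pattern.search(line) for pattern in DATA_ACCESS_PATTERNS):
            hits.append((number, line.rstrip()))
    return hits

if __name__ == "__main__":
    sample = [
        'total = price * quantity',
        'cmd = "SELECT * FROM Orders WHERE Id = ?"',
    ]
    for number, line in scan_for_data_access(sample):
        print(f"line {number}: candidate for the data-access layer -> {line}")
```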

Author(s):  
K. Velmurugan ◽  
M.A. Maluk Mohamed

One of the vital reasons for reverse engineering legacy software systems is to make them interoperable. Moreover, technological advancements and changes in usability also motivate reverse engineering to exploit new features and incorporate them into legacy software systems. In this context, Web services are emerging and evolving as a solution for business software systems, facilitating business-to-business and business-to-customer interactions. Web services are gaining significance due to inherent features such as interoperability, simple implementation, and the ability to exploit the boom in Internet infrastructure. Thus, this work proposes a framework-based strategy using .NET for effortless migration from legacy software systems to Web services. Further, this work also proposes that software metrics observed during the reverse engineering process facilitate the design of Web services from legacy systems.
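
The general idea of migrating a legacy routine behind a Web service facade can be sketched briefly. The paper targets .NET, so the Python example below is only an analogy, not the proposed framework; `legacy_price_lookup` and the endpoint shape are hypothetical stand-ins for a wrapped legacy function.

```python
import json
from wsgiref.simple_server import make_server

def legacy_price_lookup(item_id):
    # Placeholder for a call into the legacy system's data or logic.
    prices = {"A100": 12.5, "B200": 7.0}
    return prices.get(item_id)

def application(environ, start_response):
    # Expects requests such as GET /price?item=A100 and returns JSON.
    query = dict(pair.split("=", 1)
                 for pair in environ.get("QUERY_STRING", "").split("&")
                 if "=" in pair)
    body = json.dumps({"item": query.get("item"),
                       "price": legacy_price_lookup(query.get("item", ""))}).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

if __name__ == "__main__":
    make_server("localhost", 8080, application).serve_forever()
```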


Author(s):  
Leonid Stoimenov

Research in information systems interoperability is motivated by the ever-increasing heterogeneity of the computer world. New generations of applications, such as geographic information systems (GISs), place far greater demands than legacy information systems and traditional database technology can meet. The popularity of GIS in governmental and municipal institutions induces increasing amounts of available information (Stoimenov, Ðordevic-Kajan, & Stojanovic, 2000). In a local community environment (city services, local offices, local telecom, public utilities, water and power supply services, etc.), different information systems deal with huge amounts of available information, where most data in databases are geo-referenced. GIS applications often have to process geo-data obtained from various geo-information communities. Also, information that exists in different spatial databases may be useful for many other GIS applications. Numerous legacy systems have to be coupled with GIS systems, which presents additional difficulties in developing end-user applications.


2019 ◽  
Author(s):  
Walter Benitez-Davalos ◽  
Fabio López-Pires ◽  
David Cabañas ◽  
Yessica Bogado-Sarubbi

Adopting emerging computing paradigms such as cloud computing involves challenges associated with transforming legacy software to meet the essential characteristics of this computing model, e.g., on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. This paper presents a summary of technical experiences in applying one of the most popular approaches to transforming legacy software into cloud-native applications: a microservice-oriented architecture with connections to legacy systems through anti-corruption layers. Several technical considerations are presented, focusing on open-source software to incorporate modern development practices as well as to solve issues related to the cloud-native transformation.
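
An anti-corruption layer of the kind mentioned above can be illustrated with a minimal sketch. The Python code below is not drawn from the paper; the class names, legacy field names, and domain model are hypothetical, chosen only to show an adapter translating a legacy record format into the microservice's own model so the new service never depends on legacy conventions.

```python
from dataclasses import dataclass

@dataclass
class Customer:                     # domain model owned by the microservice
    customer_id: str
    full_name: str
    active: bool

class LegacyCustomerGateway:        # stands in for a call into the legacy system
    def fetch(self, key):
        return {"CUST_NO": key, "NAME_TXT": "ADA LOVELACE ", "STATUS_FLG": "A"}

class CustomerAntiCorruptionLayer:
    """Translates legacy records into the microservice's domain objects."""
    def __init__(self, gateway):
        self.gateway = gateway

    def get_customer(self, customer_id):
        record = self.gateway.fetch(customer_id)
        return Customer(
            customer_id=str(record["CUST_NO"]),
            full_name=record["NAME_TXT"].strip().title(),
            active=record["STATUS_FLG"] == "A",
        )

if __name__ == "__main__":
    acl = CustomerAntiCorruptionLayer(LegacyCustomerGateway())
    print(acl.get_customer("10042"))
```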


2006 ◽  
Vol 35 (3) ◽  
Author(s):  
Bronius Paradauskas ◽  
Aurimas Laurikaitis

This article discusses the process of extracting enterprise knowledge from the relational database and source code of legacy information systems. Problems of legacy systems and the main solutions to them are briefly described. The use of data reverse engineering and program understanding techniques to automatically infer as much as possible of the schema and semantics of a legacy information system is analyzed. An eight-step data reverse engineering algorithm for knowledge extraction from legacy systems is provided. A hypothetical example of knowledge extraction from a legacy information system is presented.
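
A first step of such data reverse engineering, recovering the physical schema from the database catalog so it can later be enriched with semantics, can be sketched as follows. This is not the article's eight-step algorithm; SQLite is used only because it ships with Python, and a real legacy DBMS would expose its catalog differently.

```python
import sqlite3

def extract_schema(connection):
    """Read table and column structure back from the SQLite catalog."""
    schema = {}
    tables = connection.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    for (table,) in tables:
        # PRAGMA table_info returns (cid, name, type, notnull, default, pk)
        columns = connection.execute(f"PRAGMA table_info({table})").fetchall()
        schema[table] = [(name, col_type, bool(pk))
                         for _, name, col_type, _, _, pk in columns]
    return schema

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
    print(extract_schema(conn))
```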


Author(s):  
Cameron J. Turner ◽  
John M. MacDonald ◽  
Jane A. Lloyd

Ideally, quality is designed into software, just as quality is designed into hardware. However, when dealing with legacy systems, demonstrating that the software meets required quality standards may be difficult to achieve. Evolving customer needs, expressed by new operational requirements, resulted in the need to develop a legacy software quality assurance program at Los Alamos National Laboratory (LANL). This need led to the development of a reverse engineering approach referred to as software archaeology. This paper documents the software archaeology approaches used at LANL to demonstrate the software quality in legacy software systems. A case study for the Robotic Integrated Packaging System (RIPS) software is included to describe our approach.


Author(s):  
Chung-Yeung Pang

Maintaining and upgrading legacy systems is one of the challenges many enterprises face today. Despite their obsolescence, legacy systems continue to provide a competitive advantage by supporting unique business processes and acting as a repository for invaluable knowledge and historical data. However, enterprises would prefer to develop their applications with modern software technology rather than continuing to develop on the mainframe, while still leveraging the existing business processes and data of their legacy systems. This chapter presents an architectural framework and implementation methodology for a Central Intelligent Agent that is responsible for legacy integration. The framework uses an Enterprise Service Bus for service integration and agents to handle services. The Central Intelligent Agent uses a Prolog-style rule-based engine and context awareness for service handling, with a complementary service agent on the mainframe side for legacy integration. The underlying framework provides a full set of functions to integrate legacy COBOL applications as services into the system without any programming effort in COBOL. The proposed technique enables fast prototyping and rapid development in an agile development process. It also facilitates legacy migration through successive and iterative processes.
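
Rule-based, context-aware service dispatch in the spirit of such a Central Intelligent Agent can be pictured with a toy sketch. The chapter's framework uses a Prolog-style engine on an Enterprise Service Bus; the Python rules, context fields, and service names below are hypothetical and serve only to show the dispatch idea.

```python
RULES = [
    # (condition on the request context, service to route to)
    (lambda ctx: ctx["operation"] == "get_account" and ctx["channel"] == "web",
     "cobol_account_inquiry_service"),
    (lambda ctx: ctx["operation"] == "get_account",
     "cached_account_service"),
    (lambda ctx: True, "default_service"),          # fallback rule
]

def dispatch(context):
    """Return the first service whose rule matches the request context."""
    for condition, service in RULES:
        if condition(context):
            return service

if __name__ == "__main__":
    print(dispatch({"operation": "get_account", "channel": "web"}))
    print(dispatch({"operation": "transfer", "channel": "branch"}))
```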


2013 ◽  
Vol 411-414 ◽  
pp. 462-466 ◽  
Author(s):  
Yue Li ◽  
Wen Hua Zeng ◽  
Lv Qing Yang ◽  
Zhuang Liang Wu ◽  
Mei Hong Wang

This paper proposes a specific architecture for a campus security management system based on SOA WCF services and RFID technology, aiming to meet the security and scalability requirements of the system's distributed environment. With the help of WCF service models, the underlying interactions among the database, the data access layer, RFID device information transactions, and business logic processing are packaged seamlessly. The WCF service models then provide a platform-independent service to the .NET MVC Framework controller, which could be replaced by any type of caller, separating the presentation and implementation of the system. By reducing coupling and greatly enhancing the security, scalability, and reliability of the system, this architecture represents one of the recommended trends in software architecture.
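
The separation described here, a presentation-side controller depending only on a service contract while data access and RFID handling sit behind the service boundary, can be mirrored in a short sketch. WCF contracts are .NET interfaces, so the Python analogue below is illustrative only, and all class and method names are hypothetical.

```python
from abc import ABC, abstractmethod

class IAccessLogService(ABC):       # rough analogue of a WCF service contract
    @abstractmethod
    def record_entry(self, tag_id: str, gate: str) -> bool: ...

class AccessLogService(IAccessLogService):
    """Packages database and RFID handling behind the service boundary."""
    def __init__(self, database):
        self.database = database

    def record_entry(self, tag_id, gate):
        self.database.append({"tag": tag_id, "gate": gate})
        return True

class SecurityController:
    """Presentation-side caller; could be replaced by any other client."""
    def __init__(self, service: IAccessLogService):
        self.service = service

    def handle_scan(self, tag_id, gate):
        return "accepted" if self.service.record_entry(tag_id, gate) else "rejected"

if __name__ == "__main__":
    controller = SecurityController(AccessLogService(database=[]))
    print(controller.handle_scan("RFID-0042", "North Gate"))
```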


Author(s):  
W. ERIC WONG ◽  
JENNY LI

Object-oriented languages support many modern programming concepts such as information hiding, inheritance, polymorphism, and dynamic binding. As a result, software systems implemented in OO languages are in general more reusable and reliable than others. Many legacy software systems, created before OO programming became popular, need to be redesigned and updated to OO programs. The process of abstracting OO designs from the procedural source code has often been done with limited assistance from program structural diagrams. Most reengineering focuses on the functionality of the original program, and the OO redesign often results in a completely new design based on the designers' understanding of the original program. Such an approach is not sufficient because it may take a significant amount of time and effort for designers to comprehend the original program. This paper presents a computer-aided semi-automatic method that abstracts OO designs from the original procedural source code. More specifically, it is a method for OO redesign based on program structural diagrams, visualization, and execution slices. We conducted a case study by applying this method to an inventory management software system. Results indicate that our method can effectively and efficiently abstract an appropriate OO design out of the original C code. In addition, some of the code from the original system can be automatically identified and reused in the new OO system.
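
One intuition behind abstracting an OO design from procedural code is grouping functions by the data structures they operate on, as a starting point for candidate classes. The toy Python sketch below is not the paper's method: its inputs are hand-written stand-ins for information that the authors derive from program structural diagrams and execution slices, and the function and structure names are hypothetical.

```python
from collections import defaultdict

# Hypothetical map from procedural functions to the structures they reference.
FUNCTION_USES = {
    "add_item":      ["Inventory"],
    "remove_item":   ["Inventory"],
    "print_invoice": ["Invoice", "Inventory"],
    "total_invoice": ["Invoice"],
}

def candidate_classes(function_uses):
    """Assign each function to the structure it references most often."""
    classes = defaultdict(list)
    for function, structures in function_uses.items():
        primary = max(set(structures), key=structures.count)
        classes[primary].append(function)
    return dict(classes)

if __name__ == "__main__":
    for class_name, methods in candidate_classes(FUNCTION_USES).items():
        print(f"class {class_name}: methods {methods}")
```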


2019 ◽  
Author(s):  
S Bauermeister ◽  
C Orton ◽  
S Thompson ◽  
R A Barker ◽  
J R Bauermeister ◽  
...  

The Dementias Platform UK (DPUK) Data Portal is a data repository facilitating access to data for 3 370 929 individuals in 42 cohorts. The Data Portal is an end-to-end data management solution providing a secure, fully auditable, remote-access environment for the analysis of cohort data. All projects utilising the data are by default collaborations with the cohort research teams generating the data. The Data Portal uses UK Secure eResearch Platform (UKSeRP) infrastructure to provide three core utilities: data discovery, access, and analysis. These are delivered using a seven-layered architecture comprising data ingestion, data curation, platform interoperability, data discovery, access brokerage, data analysis, and knowledge preservation. Automated, streamlined, and standardised procedures reduce the administrative burden for all stakeholders, particularly for requests involving multiple independent datasets, where a single request may be forwarded to multiple data controllers. Researchers are provided with their own secure ‘lab’ using VMware, which is accessed using two-factor authentication. Over the last 2 years, 160 project proposals involving 579 individual cohort data access requests were received. These were received from 268 applicants spanning 72 institutions (56 academic, 13 commercial, 3 government) in 16 countries, with 84 requests involving multiple cohorts. Projects are varied, including multi-modal, machine learning, and Mendelian randomisation analyses. Data access is usually free at the point of use, although a small number of cohorts require a data access fee.

