Enterprise Applications

Author(s):  
Christine Choppy ◽  
Denis Hatebur ◽  
Maritta Heisel ◽  
Gianna Reggio

The authors provide a method to systematically develop enterprise application architectures from problem descriptions. From these descriptions, they derive two kinds of specifications: a behavioral specification, which describes how the automated business process is carried out and can be expressed using activity or sequence diagrams, and a structural specification, which describes the classes to be implemented and the operations they provide. The structural specification is created in three steps. All diagrams are expressed in UML.

After the comprehensive discussion of integration patterns and technologies in the preceding chapters, it is clear that an integrated CPM, BPM, SOA, and service computing approach should be used to fully leverage disparate and distributed enterprise applications, global connectivity, and the underlying enterprise application integration technologies within organizations. Enterprise integration conducted in this way can deliver a performance-driven, business-oriented, and agile IT solution that helps the organization strive for competitive advantage.


Author(s):  
Vincent Yen

In large organizations, typical systems portfolios consist of a mix of legacy systems, proprietary applications, databases, off-the-shelf packages, and client-server systems. Software systems integration is always an important issue, and yet a very complex and difficult area in practice. Consider the software integration between two organizations on a supply chain: the level of complexity and difficulty multiplies quickly. How to make heterogeneous systems work with each other, within an enterprise or across the Internet, is of paramount interest to business and industry. Web services technologies are being developed as the foundation of a new generation of business-to-business (B2B) and enterprise application integration (EAI) architectures, and as important components of emerging paradigms such as grid (www.grid.org), wireless, and autonomic computing (Kreger, 2003). Early technologies for achieving software application integration used standards such as the common object request broker architecture (CORBA) of the Object Management Group (www.omg.org), the distributed component object model (DCOM) of Microsoft, and Java/RMI, the remote method invocation mechanism. CORBA and DCOM are tightly coupled technologies, while Web services are not; thus, CORBA and DCOM are more difficult to learn and implement than Web services. It is not surprising that the success of these standards has been marginal (Chung, Lin, & Mathieu, 2003). The development and deployment of Web services requires no specific underlying technology platform, which is one of their attractive features. Other favorable views on the benefits of Web services include: a simple, low-cost EAI supporting the cross-platform sharing of functions and data, and an enabler for reducing integration complexity and time (Miller, 2003). To realize these benefits, however, Web services must meet many technology requirements and capabilities.
Some of the requirements include (Zimmermann, Tomlinson & Peuser, 2003):

• Automation Through Application Clients: Arbitrary software applications running in different organizations must be able to communicate with each other directly.
• Connectivity for Heterogeneous Worlds: It should be possible to connect many different computing platforms.
• Information and Process Sharing: It should be possible to export and share both data and business processes between companies or business units.
• Reuse and Flexibility: Existing application components can be integrated easily, regardless of implementation details.
• Dynamic Discovery of Services, Interfaces, and Implementations: Application clients should be able to look up and download service address, service binding, and service interface information dynamically, i.e., at runtime.
• Business Process Orchestration Without Programming: It should be possible to orchestrate business activities into business processes, and to execute such aggregated processes automatically.

The first five requirements are technology oriented. A solution to them is XML-based Web services, or simply Web services, which employ the Web standards HTTP, URLs, and XML as the lingua franca for information and data encoding, yielding platform independence; Web services are therefore far more flexible and adaptable than earlier approaches. The last requirement relates to the concept of business workflows and workflow management systems. In supply chain management, for example, there is a purchase order process on the buyer's side and a product fulfillment process on the supplier's side. Each process represents a business workflow, or a Web service if it is automated. These two Web services can be combined into one Web service that represents a new business process. The ability to compose new Web services from existing ones is a powerful feature of Web services; however, it requires standards to support the composition process.
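The buyer/supplier example above can be sketched in miniature. The following Python fragment is illustrative only (the service names are hypothetical, and real Web services would exchange these XML messages over HTTP using SOAP or a similar protocol); it shows the core idea of two platform-independent services that exchange XML and are then composed into one new service.

```python
import xml.etree.ElementTree as ET

# Hypothetical "purchase order" service on the buyer's side:
# it encodes an order as XML, the platform-neutral wire format.
def purchase_order_service(item: str, quantity: int) -> str:
    order = ET.Element("purchaseOrder")
    ET.SubElement(order, "item").text = item
    ET.SubElement(order, "quantity").text = str(quantity)
    return ET.tostring(order, encoding="unicode")

# Hypothetical "fulfillment" service on the supplier's side:
# it decodes the XML without sharing any platform with the buyer.
def fulfillment_service(order_xml: str) -> str:
    order = ET.fromstring(order_xml)
    reply = ET.Element("shipmentNotice")
    ET.SubElement(reply, "item").text = order.findtext("item")
    ET.SubElement(reply, "quantity").text = order.findtext("quantity")
    ET.SubElement(reply, "status").text = "scheduled"
    return ET.tostring(reply, encoding="unicode")

# Composition: the two services chained into one new "order-to-ship"
# business process, itself exposable as a Web service.
def order_to_ship(item: str, quantity: int) -> str:
    return fulfillment_service(purchase_order_service(item, quantity))
```

Because both sides agree only on the XML message formats, either service could be reimplemented on a different platform without affecting the composition.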
This article will provide a simplified exposition of the underlying basic technologies, key standards, the role of business workflows and processes, and critical issues.


Author(s):  
Bahman Zamani ◽  
Shiva Rasoulzadeh

This article describes how experience in domain-specific modeling can be captured and abstracted in a domain-specific modeling language (DSML). Modeling with a DSML results in higher-quality models. Patterns of Enterprise Application Architecture (PofEAA) is a rich set of patterns that designers can use when designing (modeling) web-based enterprise applications. This article aims at defining a DSML based on the PofEAA patterns, as well as providing tool support for designing web-based enterprise applications that use these patterns. The authors have built the DSML using the profile extension mechanism of UML, by defining stereotypes. In addition to the proposed profile, they have implemented the structure and behavior of the PofEAA patterns in Rational Software Architect (RSA), which resulted in a tool that facilitates design work for designers. To show the usefulness of the tool, it is used to model two small systems based on the PofEAA patterns. The results show that much of the design is automated and the modeling speed is increased.
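To illustrate the general mechanism (not the authors' actual tool), a UML profile extends the metamodel by attaching stereotypes to model elements and enforcing well-formedness constraints on the stereotyped elements. The toy sketch below, with illustrative names only (the PofEAA pattern name Table Data Gateway and its CRUD operations are used as the example constraint), shows the idea in plain Python.

```python
from dataclasses import dataclass, field

# A minimal stand-in for a UML class element in a model.
@dataclass
class ModelClass:
    name: str
    stereotypes: set = field(default_factory=set)
    operations: list = field(default_factory=list)

# Applying a stereotype marks the element as an instance of a pattern role.
def apply_stereotype(cls: ModelClass, stereotype: str) -> ModelClass:
    cls.stereotypes.add(stereotype)
    return cls

# A constraint a profile might impose: a class stereotyped
# <<TableDataGateway>> should expose the canonical CRUD operations.
def check_table_data_gateway(cls: ModelClass) -> bool:
    if "TableDataGateway" not in cls.stereotypes:
        return True  # constraint does not apply
    return {"find", "insert", "update", "delete"} <= set(cls.operations)
```

In a real profile-based tool such as RSA, the stereotype definitions and constraints live in the profile itself and are checked by the modeling environment rather than by hand-written code.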


2010 ◽  
pp. 929-936
Author(s):  
George Feuerlicht

Following the recent changes in the global business environment, many organizations are reevaluating their approach to delivering enterprise applications and are looking for more effective ways to control IT costs. There is growing evidence of reluctance to fund large-scale implementation projects, and of tighter budgets forcing more careful cost-benefit analysis to justify IT investments. It is becoming increasingly clear that the traditional model for delivering enterprise applications that involves the implementation of licensed software such as ERP (enterprise resource planning) applications within end-user organizations is not suited to the fast-evolving business world of the 21st century. Almost invariably, situations in which organizations own and maintain their entire IT infrastructure lead to very high costs of ownership, and consequently high levels of IT spending, which can detract from the core business in which the organization is engaged. This has led to a situation in which some businesses doubt the benefits of IT (Carr, 2003), and some observers even contend that productivity improvements, once assumed to be the result of IT, are more likely to be the results of other factors such as longer working hours (Nevens, 2002). This backlash that followed the IT boom at the end of the last century has forced software vendors to seek more cost-effective models for the delivery of enterprise applications, and has led to the reemergence of the ASP (application service provider) model as an alternative to licensed software. Today, the ASP model (or software-as-a-service model) is a part of a more general trend toward utility computing, where the service provider delivers highly scalable application services to a large population of end-user organizations in a reliable and cost-effective manner, typically from a remote data center. 
Utility computing aims to supply application services on demand, similar to other utility services (e.g., gas or electricity), and relies on new technologies and architectures that enable the virtualization and sharing of resources across a large number of users in order to minimize costs and maximize utilization. The use of advanced service-oriented architectures (SOAs), grid computing, cluster technologies, and failure-resistant configurations enables the delivery of highly scalable application services in a reliable manner to a large population of users. These technological advances distinguish utility computing from the earlier ASP and outsourcing models, and will ultimately result in significant reduction in the costs of enterprise software solutions and wide adoption of the software-as-a-service model. Major IT vendors including IBM, Microsoft, Sun, Oracle, and HP are promoting utility computing, albeit under different names (e.g., on-demand computing, etc.), and are investing vast resources into the construction of data centers and related facilities (Abbas, 2003). Others, such as Salesforce.com, have been successful with providing hosted services for CRM (customer-relationship management) and other related types of applications, validating the ASP model and further confirming the trend toward utility computing. As the enterprise application software market matures, major ERP vendors are changing their revenue model to decrease their reliance on new software licenses toward income generated from software-license upgrades and product support (Karpecki, 2004; Levy, 2004). This change, combined with the fact that most organizations spend as much as 80% of software-related costs on software maintenance and related activities (Haber, 2004), creates a situation in which licensed software is de facto rented. It is precisely this high level of ongoing costs that motivates many organizations toward alternatives such as outsourcing and the ASP model. 
In this article we first examine the business drivers for the ASP model and contrast the software-as-a-service model with the traditional software-as-a-license approach. We then discuss future enterprise computing trends, focusing on the reemergence of the ASP model for enterprise applications and the likely impact of the wide adoption of this model on the IT landscape. In conclusion, we summarize the main arguments in this article.


Author(s):  
Anushree Sah ◽  
Shuchi Juyal Bhadula ◽  
Ankur Dumka ◽  
Saurabh Rawat

Enterprise applications are the DNA of any organization: they hold the business logic, handle large amounts of data, support multiprogramming, are easily maintainable and scalable, deliver high performance, are able to choreograph or orchestrate modules, and are fortified against attacks and vulnerabilities. These applications are the backbone of any organization and considerably enhance its productivity and efficiency, thus ensuring continuity of the business. Given this need for enterprise applications, in this chapter the authors present and discuss their development.


Author(s):  
Timon C. Du ◽  
Eldon Y. Li

Business process management systems, such as workflow management systems and enterprise application integration systems, manage process flow on a minute-by-minute basis in various application domains. In the conventional approach, the business process must be predefined before it is implemented. However, involving business users in the early stage of the design phase is neither efficient nor realistic in the dynamic business world. This study proposes a framework to implement dynamic business processes on the P2P Semantic Web, which provides the flexibility to alter business processes dynamically and to take semantic data into consideration. The system is demonstrated by the case of a manufacturer processing an order.
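The contrast with predefined processes can be made concrete. The sketch below is a simplified illustration under assumed semantics, not the authors' framework: a business process is held as a runtime-editable sequence of steps, so a step can be inserted or removed while the process definition is live, which is exactly the kind of dynamic alteration a predefined workflow forbids.

```python
# Hedged sketch: a business process as a runtime-editable list of steps.
# Step names and the order-processing example are illustrative only.
class DynamicProcess:
    def __init__(self):
        self.steps = []  # list of (name, callable) pairs, run in order

    def add_step(self, name, fn, position=None):
        # position=None appends; an index inserts mid-process at runtime.
        entry = (name, fn)
        if position is None:
            self.steps.append(entry)
        else:
            self.steps.insert(position, entry)

    def remove_step(self, name):
        self.steps = [s for s in self.steps if s[0] != name]

    def run(self, order):
        # Thread the order document through every step in sequence.
        for _, fn in self.steps:
            order = fn(order)
        return order
```

For the manufacturer example, an order-handling process could start as receive-then-ship, and a production-scheduling step could be spliced in at runtime when stock runs out, without redeploying the process definition.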


2014 ◽  
pp. 1927-1955
Author(s):  
Indika Kumara ◽  
Chandana Gamage

The commonality across software systems can be exploited to develop multiple heterogeneous systems successfully without undue cost, time, and effort. Systematic reuse across different systems is of paramount importance. With a well-planned reuse approach, a vendor can offer individualized products (products tailored to meet the requirements of a particular user effectively) as well as products constructed to deliver solutions for a greater variety of application domains, such as enterprise application integration and business process management. This chapter describes the development of software systems having different architectures while reusing most of the implementations of the required functionalities as-is. It presents a systematic process for crafting multi-architecture reusable components and for using those components in formulating software systems. Furthermore, the chapter highlights the significance of strategic reuse across systems in three contemporary research spheres.


2002 ◽  
Vol 17 (1) ◽  
pp. 81-85 ◽  
Author(s):  
YANNIS LABROU

Academic work on agents and ontologies is often oblivious to the complexities and realities of enterprise computing. At the same time, the practitioners of enterprise computing, although adept at building robust, real-life enterprise applications, are unaware of the academic body of work and of the opportunities for applying novel approaches of academic origin. Enterprise applications are very complex systems that are designed to support critical business operations. This article outlines the technical and business foundations of enterprise application software and briefly discusses viable opportunities for agents and ontology research.

