E-Commerce Services Based on Mobile Agents

2009 ◽  
pp. 1226-1236
Author(s):  
Giancarlo Fortino ◽  
Alfredo Garro ◽  
Wilma Russo

The Internet offers a unique opportunity for e-commerce to take central stage in the rapidly growing online economy. With the advent of the Web, the first generation of business-to-consumer (B2C) applications was developed and deployed. Classical examples include virtual shops, on-demand delivery of contents, and e-travel agencies. Another facet of e-commerce is represented by business-to-business (B2B), which can have even more dramatic economic implications since it far exceeds B2C in both the volume of transactions and rate of growth. Examples of B2B applications include procurement, customer relationship management (CRM), billing, accounting, human resources, supply chain, and manufacturing (Medjahed, Benatallah, Bouguettaya, Ngu, & Elmagarmid, 2003). Although the currently available Web-based and object-oriented technologies are well-suited for developing and supporting e-commerce services, new infrastructures are needed to achieve a higher degree of intelligence and automation of e-commerce services. Such a new generation of e-commerce services can be effectively developed and provided by combining the emerging agent paradigm and technology with new Web-based standards such as ebXML (2005). Agents have already been demonstrated to have the potential for fully supporting the development lifecycle of large-scale software systems which require complex interactions between autonomous distributed components (Luck, McBurney, & Preist, 2004). In particular, e-commerce has been one of the traditional arenas for agent technology (Sierra & Dignum, 2001). Agent-mediated e-commerce (AMEC) is concerned with providing agent-based solutions which support different stages of the trading processes in e-commerce, including needs identification, product brokering, merchant brokering, contract negotiation and agreement, payment and delivery, and service and evaluation. In addition, the mobility characteristic of particular agents (a.k.a. mobile agents), which allows them to move across the nodes of a networked environment, can further extend the support offered by agents by enabling advanced e-commerce solutions such as location-aware shopping, mobile and networked comparison shopping, mobile auction bidding, and mobile contract negotiation (Kowalczyk, Ulieru, & Unland, 2003; Maes, Guttman, & Moukas, 1999). To date, several agent- and mobile agent-based e-commerce applications and systems have been developed which allow for the creation of complex e-marketplaces, that is, e-commerce environments which offer buyers and sellers new channels and business models for trading goods and services over the Internet. However, the growing complexity of agent-based marketplaces demands proper methodologies and tools supporting the validation, evaluation, and comparison of: (1) the models, mechanisms, policies, and protocols of the agents involved in such e-marketplaces; and (2) aspects concerned with the overall complex dynamics of the e-marketplaces. The use of such methodologies and tools provides a twofold advantage: (1) existing e-marketplaces can be analyzed to identify the best reusable solutions and/or uncover hidden pitfalls for reverse engineering purposes; and (2) new models of e-marketplaces can be analyzed before their actual implementation and deployment to identify, a priori, the best solutions, thus saving reverse engineering efforts. This article presents an overview of an approach to the modeling and analysis of agent-based e-marketplaces (Fortino, Garro, & Russo, 2004a, 2005).
The approach centers on a Statecharts-based development process for agent-based applications and systems (Fortino, Russo, & Zimeo, 2004b) and on a discrete event simulation framework for mobile and multi-agent systems (MAS) (Fortino et al., 2004a). A case study that models and analyzes a real consumer-driven e-commerce service based on mobile agents within an agent-based e-marketplace on the Internet (Bredin, Kotz, & Rus, 1998; Wang, Tan, & Ren, 2002) is also described to demonstrate the effectiveness of the proposed approach.
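As a rough illustration of the consumer-driven mobile-agent scenario described above, the following Python sketch models a shopping agent that visits the seller nodes of a hypothetical e-marketplace, collects price quotes, and returns the best offer. It is not the authors' Statecharts-based framework or simulation tool; all class names, the catalog layout, and the migration-as-iteration simplification are assumptions made purely for illustration.

```python
# Minimal, illustrative sketch (not the framework from the article above):
# a consumer-driven mobile shopping agent that "migrates" across the seller
# nodes of a hypothetical e-marketplace, collects price quotes, and returns
# the best offer. All names and the catalog layout are assumptions.

from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple


@dataclass
class SellerNode:
    """A marketplace node hosting a simple product catalog."""
    name: str
    catalog: Dict[str, float]  # product -> unit price

    def quote(self, product: str) -> Optional[float]:
        return self.catalog.get(product)


@dataclass
class MobileShoppingAgent:
    """Carries its itinerary and collected quotes as it visits nodes."""
    product: str
    itinerary: List[SellerNode]
    quotes: Dict[str, float] = field(default_factory=dict)

    def run(self) -> Optional[Tuple[str, float]]:
        # Migration is modeled here as sequentially visiting each node;
        # a real mobile-agent platform would serialize the agent's state
        # and resume execution on the remote host.
        for node in self.itinerary:
            price = node.quote(self.product)
            if price is not None:
                self.quotes[node.name] = price
        if not self.quotes:
            return None
        return min(self.quotes.items(), key=lambda kv: kv[1])


if __name__ == "__main__":
    sellers = [
        SellerNode("shopA", {"book": 21.00}),
        SellerNode("shopB", {"book": 18.50}),
        SellerNode("shopC", {}),
    ]
    print(MobileShoppingAgent("book", sellers).run())  # ('shopB', 18.5)
```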


2021 ◽  
Vol 5 (4) ◽  
pp. 371
Author(s):  
Erwin Wicaksono ◽  
Fauziah Fauziah ◽  
Deny Hidayatullah

The purpose of this study is to build software for electronic customer relationship management (e-CRM) with the Framework of Dynamic CRM method, which facilitates customer relationship management between stores and customers so that customers can enjoy and feel comfortable with the store's services. In implementing this software, the authors use the system development lifecycle (SDLC) method to produce a web-based e-CRM prototype with the PHP programming language and the MySQL DBMS. This e-CRM prototype has been tested in terms of verification, validation, and prototype testing. To design the system, Use Cases, ERD, LRS, Class Diagrams, and Sequence Diagrams are used. The test results show that the e-CRM prototype works as intended and is in accordance with the planning objectives. The resulting marketplace design serves as a forum that makes it easier for shop owners in the building materials sector and similar fields to market their products. Keywords: e-CRM, Framework of Dynamic CRM, Customer, System Development Life Cycle.


Author(s):  
Mehmet Kaya ◽  
James W. Fawcett

Software development is a continuous process that usually starts with analyzing the system requirements and proceeds with design, implementation, testing, and maintenance. Regardless of how good the initial design is, the quality of source code tends to decay throughout the development process as software evolves. One of the main factors contributing to this degradation is maintenance, for instance to enhance performance or other attributes of the system or to fix newly discovered bugs. For large software systems, the development process also requires reusing existing components that may have been implemented by others. Hence, comprehensible source code (code whose intent is clear from an understandable and modular implementation) significantly reduces time and effort not only in the implementation phase of the development lifecycle, but also in the testing and maintenance phases. In other words, while software decay is inevitable, software comprehension plays a determining role in the total cost and effectiveness of both the implementation and maintenance phases. Therefore, developers should strive to create software components with a modular structure and a clear implementation to reduce development cost. In this paper, we are interested in finding ways to successfully decompose long methods (those with a poor initial implementation and/or those that have decayed over time) into smaller, more comprehensible and readable ones. This decomposition process is known as extract method refactoring and helps to reduce the overall cost of development. Most of the existing refactoring tools require users to select the code fragments that need to be extracted. We introduce a novel technique for this refactoring that seeks refactoring opportunities based on variable declarations and their uses, which confine fully extractable code regions, without any user intervention. We implemented this technique as an analysis and visualization tool to help a user identify candidate code fragments to be extracted as separate methods. With this automation tool, developers do not have to manually inspect a foreign code base to select code fragments for refactoring. Through the visual representations we provide, one can observe all suggested refactorings effectively on large-scale software systems and decide whether a particular refactoring should be applied. To show the effectiveness of our techniques, we also provide case studies conducted with this tool and technique on both our own project's source code and other open-source projects.
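To make the idea of seeking extract-method opportunities from variable declarations more concrete, here is a simplified Python sketch. It is not the tool described above (which performs a far richer analysis and provides visualization); it merely computes, for each simple assignment in a function body, the span from that assignment to the last top-level statement that uses the variable, the kind of declaration-anchored region that a full technique would refine into fully extractable candidates. All function and variable names are illustrative assumptions.

```python
# Simplified illustration of declaration-anchored extraction candidates
# (a sketch of the general idea, not the tool described above): for each
# simple assignment in a function body, compute the span from that
# assignment to the last top-level statement that still uses the variable.
# A real extract-method analysis would additionally check that the region
# is fully extractable (no other outward data flow, valid control flow).

import ast
from typing import List, Tuple


def extraction_candidates(source: str) -> List[Tuple[str, int, int]]:
    """Return (variable, decl_line, last_use_line) spans for one function."""
    tree = ast.parse(source)
    func = next(n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef))
    candidates = []
    for i, stmt in enumerate(func.body):
        # Anchor a candidate region at a simple single-target assignment.
        if (isinstance(stmt, ast.Assign) and len(stmt.targets) == 1
                and isinstance(stmt.targets[0], ast.Name)):
            var = stmt.targets[0].id
            last_use = i
            for j in range(i + 1, len(func.body)):
                names = {n.id for n in ast.walk(func.body[j])
                         if isinstance(n, ast.Name)}
                if var in names:
                    last_use = j
            candidates.append((var, stmt.lineno, func.body[last_use].lineno))
    return candidates


if __name__ == "__main__":
    SAMPLE = """
def report(orders):
    total = 0
    for amount in orders:
        total += amount
    label = "Total: " + str(total)
    print(label)
"""
    # Each tuple names a variable and the line span its uses confine.
    print(extraction_candidates(SAMPLE))  # [('total', 3, 6), ('label', 6, 7)]
```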


2016 ◽  
Vol 44 (4) ◽  
pp. 975-992
Author(s):  
Adrian S. Wisnicki

Digital Victorian studies, as the field might be called, has entered a new generation of endeavor. Of course, many older digital Victorian projects remain online and continue to be important resources for scholars working in a variety of areas. In the pantheon of the older projects we might include: The Victorian Web (Landow; 1987–2012), a long-standing project that presents an array of images and texts linked to the Victorian era as nodes in a complex network; the Rossetti Archive (McGann; 1993–2008), a comprehensive digital collection of Dante Gabriel Rossetti's poetry, prose, and visual art as well as diverse contextual materials; NINES (2003–present), a nineteenth-century digital resource aggregator that facilitates integrated searching across a variety of sites and that provides peer review for relevant scholarly projects; the Old Bailey Online (Hitchcock; 2003–15), a large-scale venture that, among other things, makes available digital images and fully searchable, structured text of the 190,000 pages that constitute the Old Bailey Proceedings; and Nineteenth-Century Serials Edition (ncse) (Brake; 2005–08), a rigorous edition of six nineteenth-century periodicals and newspapers that explores the issue of modeling nineteenth-century serials in digital form. Many other such projects might also be added. However, the rapid advance of web-based technologies has recently propelled the development of digital Victorian studies in multiple directions at once. The concurrent rise of digital humanities has also ensured that Victorian scholars now have ever more exciting options for creating and analyzing digital Victorian materials and ever more sophisticated questions for interrogating the process by which those materials are created.


2013 ◽  
Author(s):  
Laura S. Hamilton ◽  
Stephen P. Klein ◽  
William Lorie

2020 ◽  
Vol 59 (04) ◽  
pp. 294-299 ◽  
Author(s):  
Lutz S. Freudenberg ◽  
Ulf Dittmer ◽  
Ken Herrmann

Abstract Introduction Preparations of health systems to accommodate a large number of severely ill COVID-19 patients in March/April 2020 had a significant impact on nuclear medicine departments. Materials and Methods A web-based questionnaire was designed to differentiate the impact of the pandemic on inpatient and outpatient nuclear medicine operations and on public versus private health systems, respectively. Questions addressed the following issues: impact on nuclear medicine diagnostics and therapy, use of recommendations, personal protective equipment, and organizational adaptations. The survey was available for 6 days and closed on April 20, 2020. Results 113 complete responses were recorded. Nearly all participants (97 %) report a decline of nuclear medicine diagnostic procedures. The mean reductions over the last three weeks for PET/CT and scintigraphies of bone, myocardium, lung, thyroid, and sentinel lymph node were –14.4 %, –47.2 %, –47.5 %, –40.7 %, –58.4 %, and –25.2 %, respectively. Furthermore, 76 % of the participants report a reduction in therapies, especially for benign thyroid disease (–41.8 %) and radiosynoviorthesis (–53.8 %), while tumor therapies remained mainly stable. 48 % of the participants report a shortage of personal protective equipment. Conclusions Nuclear medicine services are notably reduced 3 weeks after the SARS-CoV-2 pandemic reached Germany, Austria, and Switzerland on a large scale. We must be aware that the current crisis will also have a significant economic impact on the healthcare system. As the survey cannot adapt to daily dynamic changes in priorities, it serves as a first snapshot requiring follow-up studies and comparisons with other countries and regions.


2012 ◽  
Vol 2 (2) ◽  
pp. 112-116
Author(s):  
Shikha Bhatia ◽  
Mr. Harshpreet Singh

With the mounting demand for web applications, a number of issues related to their quality have come into existence. In the field of web applications, it is very difficult to develop high-quality applications. A design pattern is a general, repeatable solution to a commonly occurring problem in software design. It should be noted that a design pattern is not a finished product that can be directly transformed into source code. Rather, a design pattern is a description or template for how to solve a problem that can be used in many different situations. Past research has shown that design patterns can greatly improve the execution speed of a software application. Design patterns are classified as creational, structural, behavioral, etc. The MVC design pattern is very productive for architecting interactive software systems and web applications. This design pattern is partition-independent, because it is expressed in terms of an interactive application running in a single address space. We will design and analyze an algorithm using the MVC approach to improve the performance of web-based applications. The objective of our study is to reduce coupling, a major object-oriented design metric, between the model and view segments of a web-based application. The implementation will be done using the .NET framework.
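As a minimal sketch of the coupling argument above (illustrative only; the study's own implementation uses the .NET framework), the following Python example separates model, view, and controller so that the view depends only on the plain data handed over by the controller and never touches the model directly. All class names are assumptions made for the example.

```python
# Minimal MVC sketch: the view never imports or queries the model, so
# coupling between model and view is limited to the plain data structures
# the controller passes across. Names here are illustrative assumptions.

class ProductModel:
    """Model: owns data access; knows nothing about rendering."""
    def __init__(self):
        self._rows = [{"id": 1, "name": "Widget", "price": 9.99},
                      {"id": 2, "name": "Gadget", "price": 14.50}]

    def all_products(self):
        return list(self._rows)


class ProductView:
    """View: renders plain dictionaries; knows nothing about storage."""
    def render(self, products):
        return "\n".join(f"{p['name']}: ${p['price']:.2f}" for p in products)


class ProductController:
    """Controller: the only component that touches both sides."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def list_products(self):
        return self.view.render(self.model.all_products())


if __name__ == "__main__":
    print(ProductController(ProductModel(), ProductView()).list_products())
```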


2020 ◽  
Author(s):  
Lungwani Muungo

The purpose of this review is to evaluate progress in molecular epidemiology over the past 24 years in cancer etiology and prevention to draw lessons for future research incorporating the new generation of biomarkers. Molecular epidemiology was introduced in the study of cancer in the early 1980s, with the expectation that it would help overcome some major limitations of epidemiology and facilitate cancer prevention. The expectation was that biomarkers would improve exposure assessment, document early changes preceding disease, and identify subgroups in the population with greater susceptibility to cancer, thereby increasing the ability of epidemiologic studies to identify causes and elucidate mechanisms in carcinogenesis. The first generation of biomarkers has indeed contributed to our understanding of risk and susceptibility related largely to genotoxic carcinogens. Consequently, interventions and policy changes have been mounted to reduce risk from several important environmental carcinogens. Several new and promising biomarkers are now becoming available for epidemiologic studies, thanks to the development of high-throughput technologies and theoretical advances in biology. These include toxicogenomics, alterations in gene methylation and gene expression, proteomics, and metabonomics, which allow large-scale studies, including discovery-oriented as well as hypothesis-testing investigations. However, most of these newer biomarkers have not been adequately validated, and their role in the causal paradigm is not clear. There is a need for their systematic validation using principles and criteria established over the past several decades in molecular cancer epidemiology.

