Advances in Machine Learning Applications in Software Engineering

Published by IGI Global (ISBN 9781591409410, 9781591409434)

Author(s):  
Shangping Ren ◽  
Jeffrey J.P. Tsai ◽  
Ophir Frieder

In this chapter, we present the role-based context constrained access control (RBCC) model. The model integrates contextual constraints specified in first-order logic with standard role-based access control (RBAC). In the RBCC access control model, the permission assignment functions are constrained by the user's current accessing contexts. The accessing contexts are further categorized into two classes, that is, system contexts and application contexts. System contexts may contain accessing time, accessing location, and other security-related system information, while application contexts are abstractions of relationships among different types of entities (i.e., subjects, roles, and objects) as well as implicit relationships derived from protected information content and external information. The ability to integrate contextual information allows the RBCC model to be flexible and capable of specifying a variety of complex access policies and providing tight and just-in-time permission activations. A set of medical domain examples will be used to demonstrate the expressiveness of the RBCC model.
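As a rough illustration of the kind of check the RBCC model enables, the following Python sketch attaches context constraints (time of day, a treating-relationship flag) to role permissions. The policy, predicate names, and data structures are illustrative assumptions, not the chapter's formal first-order-logic machinery.

```python
# Minimal sketch (not the chapter's implementation) of a context-constrained
# role-based access check: a permission is granted only if the user's role
# holds the permission AND every attached context constraint evaluates to
# True for the current system/application context.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

Context = Dict[str, object]          # e.g. {"time": 14, "treating": True}
Constraint = Callable[[Context], bool]

@dataclass
class Permission:
    action: str
    resource: str
    constraints: List[Constraint] = field(default_factory=list)

ROLE_PERMISSIONS = {
    "physician": [
        # Hypothetical policy: read patient records only during shift hours
        # and only for patients the physician is currently treating.
        Permission("read", "patient_record",
                   constraints=[lambda ctx: 8 <= ctx.get("time", -1) < 20,
                                lambda ctx: ctx.get("treating", False)]),
    ],
}

def check_access(role: str, action: str, resource: str, ctx: Context) -> bool:
    for perm in ROLE_PERMISSIONS.get(role, []):
        if perm.action == action and perm.resource == resource:
            if all(c(ctx) for c in perm.constraints):
                return True
    return False

if __name__ == "__main__":
    print(check_access("physician", "read", "patient_record",
                       {"time": 14, "treating": True}))   # True
    print(check_access("physician", "read", "patient_record",
                       {"time": 23}))                      # False
```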


Author(s):  
Daniele Gunetti

Though inductive logic programming (ILP for short) should mean the "induction of logic programs", most research and applications in this area are only loosely related to logic programming. In fact, the automatic synthesis of "true" logic programs is a difficult task, since it cannot be done without a lot of information about the sought programs, and without the ability to describe well-restricted search spaces in a simple way. In this chapter, we argue that, if such knowledge is available, inductive logic programming can be used as a valid tool for software engineering, and we propose an integrated framework for the development, maintenance, reuse, testing, and debugging of logic programs.
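To make the argument concrete, the sketch below shows induction over a deliberately tiny, well-restricted hypothesis space: given positive and negative examples of a target relation over a small parent/2 fact base, the learner keeps the candidate clause consistent with the examples. The fact base and candidate definitions are made up for illustration; real ILP systems search clause spaces far more systematically.

```python
# Minimal sketch of the ILP idea: induction is feasible when the hypothesis
# space is explicitly restricted.  The "programs" here are two candidate
# definitions of a grandparent/2 relation; the learner keeps the one that
# covers all positive examples and no negative ones.

parent = {("ann", "bob"), ("bob", "carl"), ("bob", "dana"), ("eve", "frank")}

def cand_step(x, z):      # grandparent(X,Z) :- parent(X,Y), parent(Y,Z)
    return any((x, y) in parent and (y, z) in parent for y in {p for _, p in parent})

def cand_direct(x, z):    # wrong hypothesis: grandparent(X,Z) :- parent(X,Z)
    return (x, z) in parent

positives = [("ann", "carl"), ("ann", "dana")]
negatives = [("ann", "bob"), ("eve", "carl")]

hypotheses = {"two-step parent chain": cand_step, "direct parent": cand_direct}
for name, h in hypotheses.items():
    ok = all(h(*e) for e in positives) and not any(h(*e) for e in negatives)
    print(f"{name}: {'consistent' if ok else 'rejected'}")
```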


Author(s):  
Gary D. Boetticher

Given a choice, software project managers frequently prefer traditional methods of making decisions rather than relying on empirical software engineering (empirical/machine learning-based models). One reason for this choice is the perceived lack of credibility associated with these models. To promote better empirical software engineering, a series of experiments is conducted on various NASA datasets to demonstrate the importance of assessing the ease/difficulty of a modeling situation. Each dataset is divided into three groups: a training set and "nice"/"nasty" neighbor test sets. Using a nearest neighbor approach, "nice neighbors" align closest to same-class training instances, while "nasty neighbors" align to opposite-class training instances. The "nice" and "nasty" experiments average 94% and 20% accuracy, respectively. Another set of experiments shows that ten-fold cross-validation is not sufficient for characterizing a dataset. Finally, a set of metric equations is proposed for improving the credibility assessment of empirical/machine learning models.
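A minimal sketch of the "nice"/"nasty" neighbor split described above, using a plain nearest-neighbor rule: a held-out instance whose nearest training neighbor has the same class is "nice", and one whose nearest training neighbor has the opposite class is "nasty". The feature vectors, labels, and distance metric are illustrative assumptions, not the NASA data or the chapter's exact setup.

```python
# Partition a held-out set into "nice" and "nasty" neighbors of a training set.
import math

def nearest_label(x, train):
    """Return the label of the training instance closest to x (Euclidean)."""
    return min(train, key=lambda t: math.dist(x, t[0]))[1]

# (features, label) pairs; label 1 = fault-prone, 0 = not fault-prone
train = [([2.0, 30.0], 1), ([1.0, 10.0], 0), ([3.0, 45.0], 1), ([0.5, 8.0], 0)]
held_out = [([2.5, 35.0], 1), ([1.2, 12.0], 1), ([0.8, 9.0], 0), ([2.8, 40.0], 0)]

nice  = [(x, y) for x, y in held_out if nearest_label(x, train) == y]
nasty = [(x, y) for x, y in held_out if nearest_label(x, train) != y]

print(f"nice neighbors:  {len(nice)}")   # expected to be easy to classify
print(f"nasty neighbors: {len(nasty)}")  # expected to be hard to classify
```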


Author(s):  
Paul Dietz ◽  
Aswin van den Berg ◽  
Kevin Marth ◽  
Thomas Weigert ◽  
Frank Weil

Model-driven engineering proposes to develop software systems by first creating an executable model of the system design and then transforming this model into an implementation. This chapter discusses how to automatically transform such design models into product implementations for industrial-strength systems. It provides insights, practical considerations, and lessons learned when developing code generators for applications that must conform to the constraints imposed by real-world, high-performance systems. This deeper understanding of the relevant issues will enable developers of automatic code generation systems to build transformation tools that can be deployed in industrial applications with stringent performance requirements.
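As a toy illustration of the model-to-implementation idea (not the chapter's industrial code generators, which must also satisfy stringent performance and footprint constraints), the sketch below transforms a hypothetical state-machine design model into executable code.

```python
# Minimal sketch of model-driven code generation: a small, hypothetical
# state-machine "design model" is transformed into source code, which is then
# compiled and exercised.  Model format and generator are illustrative only.

model = {
    "name": "Door",
    "initial": "Closed",
    "transitions": [  # (state, event, next_state)
        ("Closed", "open",  "Opened"),
        ("Opened", "close", "Closed"),
    ],
}

def generate(model: dict) -> str:
    lines = [f"class {model['name']}:",
             "    def __init__(self):",
             f"        self.state = {model['initial']!r}",
             "    def handle(self, event):"]
    for state, event, nxt in model["transitions"]:
        lines.append(f"        if self.state == {state!r} and event == {event!r}:")
        lines.append(f"            self.state = {nxt!r}; return")
    lines.append("        raise ValueError(f'{event} not allowed in {self.state}')")
    return "\n".join(lines)

source = generate(model)
namespace: dict = {}
exec(source, namespace)           # "deploy" the generated implementation
door = namespace["Door"]()
door.handle("open")
print(door.state)                 # Opened
```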


Author(s):  
Yi Liu ◽  
Taghi M. Khoshgoftaar

A software quality estimation model is an important tool for a given software quality assurance initiative. Software quality classification models can be used to indicate which program modules are fault-prone (FP) and not fault-prone (NFP). Such models assume that enough resources are available for quality improvement of all the modules predicted as FP. In conjunction with a software quality classification model, a quality-based ranking of program modules has practical benefits, since priority can be given to modules that are more FP. However, such a ranking cannot be achieved by traditional classification techniques. We present a novel software quality classification model based on multi-objective optimization with genetic programming (GP). More specifically, the GP-based model provides both a classification (FP or NFP) and a quality-based ranking for the program modules. The quality factor used to rank the modules is typically the number of faults or defects associated with a module. Genetic programming is ideally suited for optimizing multiple criteria simultaneously. In our study, three performance criteria are used to evolve a GP-based software quality model: classification performance, module ranking, and size of the GP tree. The third criterion addresses a commonly observed phenomenon in GP, that is, bloating. The proposed model is investigated with case studies of software measurement data obtained from two industrial software systems.
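The sketch below illustrates how the three evolution criteria named above might be computed for one candidate GP individual. The classification threshold, the rank-correlation measure used for module ranking, and the size penalty are illustrative assumptions; the chapter treats the criteria as separate objectives in a multi-objective GP, not as a fixed formula.

```python
# Evaluate one candidate's fitness on: (1) FP/NFP classification accuracy,
# (2) agreement of its module ranking with actual fault counts, (3) tree size.

def spearman(xs, ys):
    """Rank correlation between predicted quality scores and fault counts."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

def fitness(predicted_scores, actual_faults, tree_size, threshold=1.0):
    # 1) classification performance: module is predicted FP if score > threshold
    pred_fp   = [s > threshold for s in predicted_scores]
    actual_fp = [f > 0 for f in actual_faults]
    accuracy  = sum(p == a for p, a in zip(pred_fp, actual_fp)) / len(actual_fp)
    # 2) module ranking: agreement with the fault-count ordering
    ranking   = spearman(predicted_scores, actual_faults)
    # 3) tree size: penalise bloat
    bloat_penalty = tree_size / 100.0
    return accuracy, ranking, bloat_penalty   # objectives for the GP to trade off

print(fitness([0.2, 1.5, 3.0, 0.1], [0, 1, 4, 0], tree_size=23))
```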


Author(s):  
Witold Pedrycz ◽  
Giancarlo Succi

Learning abilities and high transparency are two important and highly desirable features of any model of software quality. The transparency and user-centricity of quantitative models in software engineering are of paramount relevance, as they help us gain a better and more comprehensive insight into the revealed relationships characteristic of software quality and software processes. In this study, we are concerned with logic-driven architectures of logic models based on fuzzy multiplexers (fMUXs). These constructs exhibit a clear and modular topology whose interpretation gives rise to a collection of straightforward logic expressions. The design of the logic models is based on genetic optimization, and genetic algorithms in particular. Through the prudent use of this optimization framework, we address the issues of structural and parametric optimization of the logic models. Experimental studies exploit software data that relate software metrics (measures) to the number of modifications made to software modules.
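For intuition, a single two-input fuzzy multiplexer node can be sketched as below: a fuzzy "select" signal blends the two inputs through a t-norm/s-norm pair. The choice of product t-norm and probabilistic-sum s-norm is an assumption, and the chapter's fMUX networks and their genetic optimization are considerably richer.

```python
# One fuzzy multiplexer (fMUX) node: y = (NOT select AND a) OR (select AND b),
# realised with a product t-norm (AND) and probabilistic-sum s-norm (OR).

def t_norm(a: float, b: float) -> float:        # fuzzy AND (product)
    return a * b

def s_norm(a: float, b: float) -> float:        # fuzzy OR (probabilistic sum)
    return a + b - a * b

def fmux(a: float, b: float, select: float) -> float:
    return s_norm(t_norm(1.0 - select, a), t_norm(select, b))

# With a crisp select signal the fMUX reduces to an ordinary multiplexer;
# intermediate select values give an interpretable, graded blend.  In the
# chapter, the select signals and network structure are tuned by a genetic
# algorithm against software-metrics data.
print(fmux(0.9, 0.2, select=0.0))   # ~0.9 -> follows input a
print(fmux(0.9, 0.2, select=1.0))   # 0.2  -> follows input b
print(fmux(0.9, 0.2, select=0.4))   # graded combination
```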


Author(s):  
I-Ling Yen ◽  
Tong Gao

Reconfigurability is an important requirement in many application systems. Many approaches have been proposed to achieve static/dynamic reconfigurability. Service-oriented architecture offers a certain degree of reconfigurability due to its support for dynamic composition. When system requirements change, new compositions of services can be determined to satisfy the new requirements. However, analysis, especially QoS-based analysis, is generally required to make appropriate service selections and service configurations. In this chapter, we discuss the development of QoS-based composition analysis techniques and propose a QoS specification model. The specification model facilitates QoS-based specification of the properties of the Web services and the requirements of the application systems. The composition analysis techniques can be used to analyze QoS tradeoffs and determine the best selections and configurations of the Web services. We develop a composition analysis framework and use the genetic algorithm in the framework for composition decision making. The framework currently supports SOA performance analysis. The details of the genetic algorithm for the framework and the performance analysis techniques are discussed in this chapter.
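A minimal sketch of the GA-driven composition decision described above: each abstract task has several candidate Web services with (latency, cost) QoS values, and a small genetic algorithm searches for the selection that minimizes a weighted aggregate. The candidate data, weights, and GA settings are illustrative assumptions, not the chapter's framework.

```python
# Tiny genetic algorithm for QoS-based service selection.
import random

random.seed(0)

# candidates[task] = list of (latency_ms, cost) pairs for that task
candidates = [
    [(120, 5), (80, 9), (200, 2)],
    [(60, 4), (90, 1)],
    [(150, 3), (100, 6), (70, 10)],
]
W_LATENCY, W_COST = 1.0, 10.0            # assumed trade-off weights

def fitness(sel):
    latency = sum(candidates[t][s][0] for t, s in enumerate(sel))
    cost    = sum(candidates[t][s][1] for t, s in enumerate(sel))
    return W_LATENCY * latency + W_COST * cost     # lower is better

def random_selection():
    return [random.randrange(len(c)) for c in candidates]

def mutate(sel):
    t = random.randrange(len(sel))
    return sel[:t] + [random.randrange(len(candidates[t]))] + sel[t + 1:]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [random_selection() for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness)
    parents = population[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = min(population, key=fitness)
print("best service per task:", best, "aggregate QoS score:", fitness(best))
```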


Author(s):  
Min Chen ◽  
Shu-Ching Chen

This chapter introduces an advanced content-based image retrieval (CBIR) system, MMIR, where Markov model mediator (MMM) and multiple instance learning (MIL) techniques are integrated seamlessly and act coherently as a hierarchical learning engine to boost both retrieval accuracy and efficiency. It is well understood that the major bottleneck of CBIR systems is the large semantic gap between the low-level image features and the high-level semantic concepts. In addition, the perception subjectivity problem also challenges a CBIR system. To address these issues and challenges, the proposed MMIR system utilizes the MMM mechanism to direct the focus on image-level analysis, together with the MIL technique (with the neural network technique as its core) to capture and learn the object-level semantic concepts in real time with the help of user feedback. In addition, from a long-term learning perspective, the user feedback logs are explored by MMM to speed up the learning process and to increase the retrieval accuracy for a query. The comparative studies on a large set of real-world images demonstrate the promising performance of our proposed MMIR system.
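As a much-simplified illustration of the long-term learning idea, the sketch below accumulates relevance-feedback logs into an image-to-image affinity structure and uses it to re-rank candidates for a new query. The actual MMM formulation in the chapter is more elaborate; the image identifiers and update rule here are illustrative assumptions.

```python
# Long-term relevance feedback: images judged relevant together in past
# queries gain pairwise affinity, which later boosts their ranking.
from collections import defaultdict
from itertools import combinations

affinity = defaultdict(float)            # affinity[(i, j)] over image ids

def log_feedback(relevant_images):
    """Update long-term affinities from one query's positive feedback set."""
    for i, j in combinations(sorted(relevant_images), 2):
        affinity[(i, j)] += 1.0
        affinity[(j, i)] += 1.0

def rerank(candidates, marked_relevant):
    """Re-rank candidate images by accumulated affinity to the marked set."""
    score = {c: sum(affinity[(c, r)] for r in marked_relevant) for c in candidates}
    return sorted(candidates, key=lambda c: score[c], reverse=True)

# two historical feedback sessions
log_feedback({"img3", "img7", "img9"})
log_feedback({"img3", "img9"})

# current query: user marks img3 relevant; img9 should be promoted
print(rerank(["img1", "img5", "img9", "img7"], marked_relevant={"img3"}))
```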


Author(s):  
Marek Reformat ◽  
Petr Musilek ◽  
Efe Igbide

The amount of software engineering data gathered by software companies amplifies the importance of tools and techniques dedicated to processing and analyzing that data. More and more methods are being developed to extract knowledge from data and build data models. In such cases, selection of the most suitable data processing methods and the quality of the extracted knowledge are of great importance. Software maintenance is one of the most time- and effort-consuming tasks among all phases of a software life cycle. Maintenance managers and personnel look for methods and tools supporting analysis of software maintenance data in order to gain the knowledge needed to prepare better plans and schedules of software maintenance activities. Software engineering data models should provide quantitative as well as qualitative outputs. It is desirable to build these models based on a well-delineated logic structure. Such models would enhance maintainers' understanding of the factors that influence maintenance efforts. This chapter focuses on defect-related activities that are the core of corrective maintenance. Two aspects of these activities are considered: the number of software components that have to be examined during a defect removal process, and the time needed to remove a single defect. Analysis of the available datasets leads to the development of data models, extraction of IF-THEN rules from these models, and construction of ensemble-based prediction systems built from these data models. The data models are developed using well-known tools such as See5/C5.0 and 4cRuleBuilder, as well as a new multi-level evolutionary-based algorithm. Single data models are put together into ensemble prediction systems that use elements of evidence theory for the purpose of inference about the degree of belief in the final prediction.
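The evidence-theory fusion step can be sketched as follows: each data model contributes a basic belief assignment over an assumed frame of discernment, and Dempster's rule of combination merges them into a fused degree of belief. The class labels and mass values are illustrative, not taken from the chapter's datasets or models.

```python
# Dempster's rule of combination for two models' basic belief assignments
# over the (assumed) frame {"low", "high"} maintenance effort.

FRAME = frozenset({"low", "high"})

def dempster(m1, m2):
    """Combine two basic belief assignments (dicts: frozenset -> mass)."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# model 1 (e.g. a decision-rule model) mostly believes "high effort"
m1 = {frozenset({"high"}): 0.6, frozenset({"low"}): 0.1, FRAME: 0.3}
# model 2 (e.g. an evolutionary rule set) is less certain
m2 = {frozenset({"high"}): 0.4, frozenset({"low"}): 0.2, FRAME: 0.4}

for focal, mass in dempster(m1, m2).items():
    print(set(focal), round(mass, 3))
```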


Author(s):  
Baowen Xu ◽  
Xiaoyuan Xie ◽  
Liang Shi ◽  
Changhai Nie

Genetic algorithms are a kind of global meta-heuristic search technique that searches intelligently for optimal solutions to a problem. Evolutionary testing is a promising testing technique that utilises genetic algorithms to generate test data for various testing objectives. It has been researched and applied in many testing areas, including structural testing, temporal performance testing, safety testing, specification-based testing, and so forth. Experimental studies have shown that, compared with traditional techniques, evolutionary testing can greatly improve testing efficiency.
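A minimal sketch of evolutionary structural test-data generation: a genetic search evolves inputs to a hypothetical function under test, guided by a branch-distance fitness that measures how close an input comes to taking a hard-to-reach branch. The target predicate, encoding, and GA parameters are illustrative assumptions.

```python
# Evolve integer inputs (a, b) toward the target branch: a == b and b > 100.
import random

random.seed(1)

def branch_distance(a: int, b: int) -> float:
    """0 when the target branch is taken; larger means further away."""
    d = abs(a - b)                       # distance for the a == b condition
    d += max(0, 101 - b)                 # distance for the b > 100 condition
    return float(d)

def mutate(ind):
    i = random.randrange(2)
    child = list(ind)
    child[i] += random.randint(-10, 10)
    return tuple(child)

population = [(random.randint(0, 200), random.randint(0, 200)) for _ in range(30)]
for gen in range(200):
    population.sort(key=lambda ind: branch_distance(*ind))
    if branch_distance(*population[0]) == 0.0:
        break                            # target branch covered
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = min(population, key=lambda ind: branch_distance(*ind))
print("best test input:", best, "branch distance:", branch_distance(*best))
```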

