Breakthroughs in Software Science and Computational Intelligence
Latest Publications

Total documents: 25 (five years: 0)
H-index: 1 (five years: 0)
Published by: IGI Global
ISBN: 9781466602649, 9781466602656

Author(s): Shaohua Teng, Wei Zhang, Haibin Zhu, Xiufen Fu, Jiangyi Su, ...

The LLF (Least Laxity First) scheduling algorithm assigns a priority to a task according to its execution urgency: the smaller the laxity value of a task, the sooner it needs to be executed. When two or more tasks have the same or nearly the same laxity values, the LLF scheduling algorithm leads to frequent switches among tasks, causes extra overhead in the system, and therefore restricts its application. The least switch and laxity first scheduling algorithm proposed in this paper searches for an appropriate common divisor in order to improve the LLF algorithm for periodic tasks.
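
As a point of reference, the following is a minimal sketch of the basic LLF priority rule that the paper improves on; the task fields, time units, and example values are illustrative assumptions, not the paper's notation.

```python
# Minimal sketch of the basic LLF rule: at each scheduling point, pick the
# ready task with the smallest laxity (slack until its deadline).
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline: int        # absolute deadline (time units)
    remaining: int       # remaining execution time (time units)

def laxity(task: Task, now: int) -> int:
    """Laxity = time to deadline minus remaining execution time."""
    return task.deadline - now - task.remaining

def pick_next(ready: list[Task], now: int) -> Task:
    """Least Laxity First: the task with the smallest laxity runs next."""
    return min(ready, key=lambda t: laxity(t, now))

tasks = [Task("A", deadline=10, remaining=4), Task("B", deadline=8, remaining=3)]
print(pick_next(tasks, now=0).name)  # B: laxity 5 vs. A's laxity 6
```

When two tasks have equal or nearly equal laxity, the `min` in `pick_next` can flip between them at every scheduling point, which is exactly the context-switch overhead the proposed algorithm targets.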


Author(s): Yingxu Wang, Xinming Tan, Cyprian F. Ngolah, Philip Sheu

Type theories are fundamental for underpinning data object modeling and system architectural design in computing and software engineering. Abstract Data Types (ADTs) are a set of highly generic and rigorously modeled data structures in type theory. ADTs also play a key role in Object-Oriented (OO) technologies for software system design and implementation. This paper presents a formal modeling methodology for ADTs using the Real-Time Process Algebra (RTPA), which supports both architectural and behavioral modeling of ADTs and complex data objects. Formal architectures, static behaviors, and dynamic behaviors of a set of ADTs are comparatively studied. The architectural models of the ADTs are created using RTPA architectural modeling methodologies known as the Unified Data Models (UDMs). The static behaviors of the ADTs are specified and refined by a set of Unified Process Models (UPMs) of RTPA. The dynamic behaviors of the ADTs are modeled by process dispatching technologies of RTPA. This work has been applied in a number of real-time and non-real-time system designs, such as a Real-Time Operating System (RTOS+), a Cognitive Learning Engine (CLE), and an automatic code generator based on RTPA.
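
To make the architectural/behavioral split concrete for readers unfamiliar with ADT modeling, here is a rough analogy in plain Python rather than RTPA's UDM/UPM notation; the bounded-stack example and its field names are an illustration of the general idea, not drawn from the paper.

```python
# Illustrative analogy only: an ADT split into an "architectural" part (the
# data layout) and "behavioral" parts (the operations defined on it).
class BoundedStack:
    def __init__(self, capacity: int):
        # Architectural part: the fields that define the data object.
        self.capacity = capacity
        self.items: list[int] = []

    # Static behaviors: the operations defined on the data object.
    def push(self, value: int) -> bool:
        if len(self.items) >= self.capacity:   # overflow is reported, not raised
            return False
        self.items.append(value)
        return True

    def pop(self) -> int | None:
        return self.items.pop() if self.items else None

s = BoundedStack(2)
print(s.push(1), s.push(2), s.push(3))  # True True False
print(s.pop(), s.pop(), s.pop())        # 2 1 None
```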


Author(s): Yingxu Wang, Xinming Tan, Cyprian F. Ngolah

Real-Time Process Algebra (RTPA) is a denotational mathematics for the algebraic modeling and manipulation of software system architectures and behaviors by means of the Unified Data Models (UDMs) and Unified Process Models (UPMs). On the basis of the RTPA specification and refinement methodologies, automatic software code generation is enabled toward improving software development productivity. This paper examines the design and development of the RTPA-based software code generator (RTPA-CG), which transfers system models in RTPA architectures and behaviors into C++ or Java. A two-phase strategy has been employed in the design of the code generator. The first phase analyzes the lexical, syntactical, and type specifications of a software system modeled in RTPA, which results in a set of abstract syntax trees (ASTs). The second phase translates the ASTs into C++ or Java based on predesigned mapping strategies and code generation rules. The RTPA code generator toolkit encompasses an RTPA lexer, a parser, a type checker, and a code builder. Experimental results show that system models in RTPA can be rigorously processed and that corresponding C++/Java code can be automatically generated using the toolkit. The generated code is executable and effective with the support of an RTPA run-time library.
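
A hypothetical, heavily simplified sketch of the two-phase idea follows; the function names, the toy `x := 1` model syntax, and the mapping table are assumptions for illustration and are not the actual RTPA-CG components.

```python
# Phase 1 parses a model into abstract syntax trees (ASTs); phase 2 walks the
# ASTs and emits target code from a per-construct mapping table.
AST = tuple  # ("assign", target, expr) in this toy example

def phase1_parse(model_text: str) -> list[AST]:
    """Toy 'lexer/parser': turn lines like 'x := 1' into ASTs."""
    trees = []
    for line in model_text.splitlines():
        target, expr = (part.strip() for part in line.split(":="))
        trees.append(("assign", target, expr))
    return trees

CPP_RULES = {"assign": "{target} = {expr};"}   # per-construct mapping rules

def phase2_emit(trees: list[AST]) -> str:
    """Toy 'code builder': map each AST node to a C++ statement."""
    return "\n".join(CPP_RULES[kind].format(target=t, expr=e) for kind, t, e in trees)

print(phase2_emit(phase1_parse("x := 1\ny := x + 2")))
# x = 1;
# y = x + 2;
```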


Author(s): Witold Pedrycz

Information granules and the ensuing Granular Computing offer interesting opportunities to endow processing with an important facet of human-centricity. This facet implies that the underlying processing supports non-numeric data inherently associated with the variable perception of humans. As systems commonly become distributed and hierarchical, managing granular information in hierarchical and distributed architectures is of growing interest, especially when invoking mechanisms of knowledge generation and knowledge sharing. The outstanding feature of human centricity in Granular Computing, along with essential fuzzy set-based constructs, constitutes the crux of this study. The author elaborates on some new directions of knowledge elicitation and quantification realized in the setting of fuzzy sets. In this regard, the paper concentrates on knowledge-based clustering. It is also emphasized that the collaboration and reconciliation of locally available knowledge give rise to the concept of higher-type information granules. Other interesting directions enhancing the human centricity of computing with fuzzy sets deal with the non-numeric, semi-qualitative characterization of information granules, as well as the inherent evolving capabilities of the associated human-centric systems. The author discusses a suite of algorithms facilitating a qualitative assessment of fuzzy sets, formulates a series of associated optimization tasks guided by well-formulated performance indexes, and discusses the underlying essence of the resulting solutions.
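
For readers unfamiliar with fuzzy clustering, the following is a generic fuzzy c-means-style membership computation, included only to illustrate graded (fuzzy) cluster membership; it is not the knowledge-based clustering algorithm developed in the paper, and all values are made up.

```python
# Fuzzy membership grades of a point with respect to cluster prototypes
# (standard fuzzy c-means formula with fuzzifier m).
def memberships(x: float, prototypes: list[float], m: float = 2.0) -> list[float]:
    """Membership of x in each cluster; grades lie in [0, 1] and sum to 1."""
    dists = [abs(x - v) or 1e-12 for v in prototypes]   # avoid division by zero
    u = []
    for d_i in dists:
        denom = sum((d_i / d_j) ** (2.0 / (m - 1.0)) for d_j in dists)
        u.append(1.0 / denom)
    return u

print([round(g, 2) for g in memberships(2.0, [0.0, 5.0])])  # [0.69, 0.31]
```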


Author(s): Nilar Aye, Takuro Ito, Fumio Hattori, Kazuhiro Kuwabara, Kiyoshi Yasuda

This paper proposes a remote conversation support system for people with aphasia. The aim of the system is to improve the quality of life (QoL) of people suffering from cognitive disabilities. In this framework, a topic list is used as a conversation assistant in addition to the video phone. The key feature is sharing the focus of attention on the topic list between the patient and the communication partner over the network, to facilitate remote communication. The results of two preliminary experiments indicate the potential of the system.
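
To illustrate what sharing the focus of attention could look like at the message level, here is a hypothetical sketch; the message format, topic list, and function names are invented for illustration and do not describe the paper's actual implementation.

```python
# When one side selects a topic, a small message is sent so the other side's
# view highlights the same entry.
import json

TOPICS = ["family", "hobbies", "meals", "travel"]

def make_focus_message(topic_index: int) -> str:
    """Serialize the currently focused topic for the remote peer."""
    return json.dumps({"type": "focus", "topic": topic_index})

def apply_focus_message(message: str) -> str:
    """Return the topic the remote peer should highlight."""
    data = json.loads(message)
    return TOPICS[data["topic"]]

msg = make_focus_message(2)          # patient selects "meals"
print(apply_focus_message(msg))      # partner's view highlights "meals"
```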


Author(s): Claude Moulin, Marco Luca Sbodio

For e-Government applications, the symbiotic aspect must be taken into account at three stages: at design time, in order to integrate the end user; at delivery time, when civil servants have to discover and interact with new services; and at run time, when ambient intelligence could help citizens interact with specific services. In this paper, we focus on the first two stages. We show how interoperability issues must concern application designers. We also present how semantics can help civil servants when they have to deal with e-government service frameworks. We then describe an actual application developed during the European Terregov project, where semantics is the key point for simplifying the role of citizens when requesting health care services.


Author(s): Yingxu Wang, Yanan Zhang, Philip C.-Y. Sheu, Xuhui Li, Hong Guo

An Automated Teller Machine (ATM) is a safety-critical and real-time system that is highly complicated in design and implementation. This paper presents the formal design, specification, and modeling of the ATM system using a denotational mathematics known as Real-Time Process Algebra (RTPA). The conceptual model of the ATM system is introduced as the initial requirements for the system. The architectural model of the ATM system is created using RTPA architectural modeling methodologies and refined by a set of Unified Data Models (UDMs), which share a generic mathematical model of tuples. The static behaviors of the ATM system are specified and refined by a set of Unified Process Models (UPMs) for the ATM transaction processing and system supporting processes. The dynamic behaviors of the ATM system are specified and refined by process priority allocation, process deployment, and process dispatch models. Based on the formal design models of the ATM system, code can be automatically generated using the RTPA Code Generator (RTPA-CG) or seamlessly transformed into programs by programmers. The formal models of the ATM may serve not only as a formal design paradigm for real-time software systems, but also as a test bench for the expressive power and modeling capability of existing formal methods in software engineering.
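
As a toy illustration of the kind of transaction-processing behavior being modeled (not RTPA notation, and not the paper's actual UDMs/UPMs), consider a minimal withdrawal process over a small account table; all names and values are assumptions.

```python
# The account table plays the role of a data model; the withdrawal process
# plays the role of a behavior model operating on it.
accounts = {"1234": {"pin": "0000", "balance": 500}}   # "data model"

def withdraw(card: str, pin: str, amount: int) -> str:  # "process model"
    acct = accounts.get(card)
    if acct is None or acct["pin"] != pin:
        return "card/PIN rejected"
    if amount <= 0 or amount > acct["balance"]:
        return "amount rejected"
    acct["balance"] -= amount
    return f"dispense {amount}, balance {acct['balance']}"

print(withdraw("1234", "0000", 120))   # dispense 120, balance 380
print(withdraw("1234", "9999", 10))    # card/PIN rejected
```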


Author(s): Chandra Das, Pradipta Maji

In order to apply a powerful pattern recognition algorithm to predict functional sites in proteins, amino acids cannot be used directly as inputs, since they are non-numerical variables; they therefore need encoding prior to input. In this regard, the bio-basis function maps a non-numerical sequence space to a numerical feature space. One of the important issues for the bio-basis function is how to select a minimum set of bio-basis strings with maximum information. In this paper, an efficient method to select bio-basis strings for the bio-basis function is described, integrating the concepts of the Fisher ratio and the “degree of resemblance”. The integration enables the method to select a minimum set of the most informative bio-basis strings, while the “degree of resemblance” enables efficient selection of a set of distinct bio-basis strings; in effect, it reduces the redundant features in the numerical feature space. Quantitative indices are proposed for evaluating the quality of the selected bio-basis strings. The effectiveness of the proposed bio-basis string selection method, along with a comparison with existing methods, is demonstrated on different data sets.
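
The selection idea can be sketched as a greedy filter: rank candidates by a relevance score and reject any candidate too similar to one already chosen. The scores, sequences, similarity measure, and threshold below are illustrative stand-ins (the relevance score plays the role of the Fisher ratio, the position-match fraction plays the role of the degree of resemblance) and are not the paper's actual definitions.

```python
# Greedy selection of informative yet mutually distinct candidate strings.
def resemblance(a: str, b: str) -> float:
    """Fraction of positions at which two equal-length strings agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def select_bases(scored: list[tuple[str, float]], max_resemblance: float = 0.6) -> list[str]:
    selected: list[str] = []
    for seq, _score in sorted(scored, key=lambda p: p[1], reverse=True):
        if all(resemblance(seq, s) < max_resemblance for s in selected):
            selected.append(seq)
    return selected

candidates = [("ACDEF", 0.9), ("ACDEY", 0.8), ("WLKMN", 0.7)]
print(select_bases(candidates))   # ['ACDEF', 'WLKMN'] -- 'ACDEY' is too similar
```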


Author(s): Witold Kinsner, Hong Zhang

This paper presents estimations of multi-scale (multi-fractal) measures for feature extraction from deoxyribonucleic acid (DNA) sequences, and demonstrates the intriguing possibility of identifying biological functionality using information contained within the DNA sequence. We have developed a technique that seeks patterns or correlations in the DNA sequence at a higher level than the local base-pair structure. The technique has three main steps: (i) transforming the DNA sequence symbols into a modified Lévy walk, (ii) transforming the Lévy walk into a signal spectrum, and (iii) breaking the spectrum into sub-spectra and treating each of these as an attractor from which the multi-fractal dimension spectrum is estimated. An optimal minimum window size and volume element size are found for estimating the multi-fractal measures. Experimental results show that DNA is multi-fractal, and that the multi-fractality changes depending upon the location (coding or non-coding region) in the sequence.
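
Step (i) can be illustrated with the classic purine/pyrimidine DNA walk, in which A/G map to +1 and C/T to -1 and the steps are accumulated; note this is the standard construction, not necessarily the paper's specific "modified Lévy walk".

```python
# Classic DNA walk: purines step up, pyrimidines step down.
STEP = {"A": +1, "G": +1, "C": -1, "T": -1}

def dna_walk(sequence: str) -> list[int]:
    """Cumulative walk: position after each base in the sequence."""
    walk, position = [], 0
    for base in sequence.upper():
        position += STEP.get(base, 0)   # unknown symbols contribute no step
        walk.append(position)
    return walk

print(dna_walk("ATGCGGA"))   # [1, 0, 1, 0, 1, 2, 3]
```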


Author(s): Yang Liu, Luyang Jiao, Guohua Bai, Boqin Feng

From the perspective of cognitive informatics, cognition can be viewed as the acquisition of knowledge. In real-world applications, information systems usually contain some degree of noisy data. A new model is proposed to deal with the hybrid-feature selection problem by combining the neighbourhood approximation and variable precision rough set models. A rule induction algorithm can then learn from the selected features in order to reduce the complexity of rule sets. Through the proposed integration, the knowledge acquisition process becomes insensitive to the dimensionality of the data, with a pre-defined tolerance degree of noise and uncertainty for misclassification. When the authors applied the method to a Chinese diabetes diagnosis problem, the hybrid-attribute reduction method selected only five attributes from a total of thirty-four measurements. The rule learner produced eight rules with, on average, two attributes in the antecedent of each IF-THEN rule, which is a manageable set of rules. The demonstrated experiment shows that the presented approach is effective in handling real-world problems.
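
As an intuition for the neighbourhood-based part, the following simplified sketch computes how consistently a feature subset classifies samples when each sample is compared with its neighbours within a radius, with a tolerance beta for misclassified neighbours; the radius, beta, and data are illustrative assumptions, not the paper's parameters or its actual reduction algorithm.

```python
# Dependency of the decision on a feature subset: the fraction of samples whose
# neighbours (within a radius, on the chosen features) mostly share their class.
def dependency(X, y, features, radius=0.2, beta=0.9):
    consistent = 0
    for i, xi in enumerate(X):
        nbrs = [j for j, xj in enumerate(X)
                if max(abs(xi[f] - xj[f]) for f in features) <= radius]
        same = sum(y[j] == y[i] for j in nbrs)
        if same / len(nbrs) >= beta:          # nbrs always contains i itself
            consistent += 1
    return consistent / len(X)

X = [(0.1, 0.9), (0.15, 0.8), (0.9, 0.1), (0.85, 0.2)]
y = [0, 0, 1, 1]
print(dependency(X, y, features=[0]))   # 1.0: feature 0 alone separates classes
```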

