Software and Intelligent Sciences
Latest Publications

Total documents: 29 (five years: 0)
H-index: 1 (five years: 0)
Published by: IGI Global
ISBN: 9781466602618, 9781466602625

Author(s):  
Du Zhang

Software engineering research and practice have thus far been conducted primarily in a value-neutral setting, where every artifact in software development, such as a requirement, use case, test case, or defect, is treated as equally important during the development process. Such value-neutral software engineering has a number of shortcomings. Value-based software engineering integrates value considerations into the full range of existing and emerging software engineering principles and practices. Machine learning has been playing an increasingly important role in helping to develop and maintain large and complex software systems, but machine learning applications to software engineering have been largely confined to the value-neutral setting. The general message of this paper is that machine learning methods and algorithms should be applied to value-based software engineering: the training data, background knowledge, domain theory, heuristics, and bias used by machine learning methods in generating target models or functions should be aligned with stakeholders’ value propositions. An initial research agenda is proposed for machine learning in value-based software engineering.
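To make the value-neutral vs. value-based distinction concrete, the sketch below contrasts an evaluation that weights every artifact equally with one that weights mistakes by stakeholder value. The scenario, the labels, and the value weights are all illustrative assumptions, not taken from the paper.

```python
def weighted_error(y_true, y_pred, values):
    # Value-neutral evaluation treats every artifact equally; a value-based
    # evaluation weights each misclassification by the stakeholder value
    # assigned to the artifact.
    total = sum(values)
    missed = sum(v for t, p, v in zip(y_true, y_pred, values) if t != p)
    return missed / total

y_true = [1, 0, 1, 0]   # hypothetical labels: 1 = defect-prone module
y_pred = [0, 0, 1, 0]   # the classifier misses the first defect
equal  = [1, 1, 1, 1]   # value-neutral weighting
valued = [10, 1, 1, 1]  # stakeholders care most about module 0 (assumption)

print(weighted_error(y_true, y_pred, equal))   # 0.25
print(weighted_error(y_true, y_pred, valued))  # 10/13, about 0.77
```

The same predictions look acceptable under value-neutral weighting but poor once stakeholder value is taken into account, which is the gap the paper's research agenda targets.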


Author(s):  
Yingxu Wang ◽  
Shushma Patel

It is recognized that software is a unique abstract artifact that does not obey any known physical laws. For software engineering to become a mature engineering discipline like others, it must establish its own theoretical framework and laws, which are perceived to rely mainly on cognitive informatics and denotational mathematics, supplementing computing science, information science, and formal linguistics. This paper analyzes the basic properties of software and seeks the cognitive informatics foundations of software engineering. The nature of software is characterized by its informatics, behavioral, mathematical, and cognitive properties. The cognitive informatics foundations of software engineering are explored on the basis of the informatics laws of software and software engineering psychology. A set of fundamental cognitive constraints of software engineering is identified, such as intangibility, complexity, indeterminacy, diversity, polymorphism, inexpressiveness, inexplicit embodiment, and unquantifiable quality measures. The conservative productivity of software is revealed on the basis of the constraints of human cognitive capacity.


Author(s):  
Janusz Kacprzyk ◽  
Slawomir Zadrozny

We consider linguistic database summaries in the sense of Yager (1982), in an implementable form proposed by Kacprzyk & Yager (2001) and Kacprzyk, Yager & Zadrozny (2000), exemplified, for a personnel database, by “most employees are young and well paid” (with some degree of truth), and their extensions as a very general tool for human-consistent summarization of large data sets. We advocate the use of the concept of a protoform (prototypical form), promoted by Zadeh and shown by Kacprzyk & Zadrozny (2005) to be a general form of a linguistic data summary. We then present an extension of our interactive approach to fuzzy linguistic summaries, based on fuzzy logic and fuzzy database queries with linguistic quantifiers. We show how fuzzy queries are related to linguistic summaries, and that one can introduce a hierarchy of protoforms, or abstract summaries in the sense of Zadeh’s (2002) latest ideas, meant mainly to increase the deduction capabilities of search engines. We show an implementation for the summarization of Web server logs.
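As a rough illustration of how the degree of truth of such a summary can be computed, the sketch below follows Zadeh's calculus of linguistically quantified propositions as used in Yager's (1982) approach: the truth of “most employees are young and well paid” is the quantifier membership applied to the average satisfaction of the conjunctive condition (conjunction taken as minimum). All membership functions and their parameters here are illustrative assumptions, not the ones used in the cited implementations.

```python
def mu_most(p):
    # Membership of the proportion p in the fuzzy quantifier "most":
    # 0 below 0.3, 1 above 0.8, linear in between (illustrative parameters)
    if p <= 0.3:
        return 0.0
    if p >= 0.8:
        return 1.0
    return (p - 0.3) / 0.5

def mu_young(age):
    # Illustrative membership: fully "young" up to 30, not at all past 45
    if age <= 30:
        return 1.0
    if age >= 45:
        return 0.0
    return (45 - age) / 15

def mu_well_paid(salary):
    # Illustrative membership: fully "well paid" above 70000, none below 40000
    if salary >= 70000:
        return 1.0
    if salary <= 40000:
        return 0.0
    return (salary - 40000) / 30000

def truth_of_summary(records):
    # T("most employees are young and well paid")
    #   = mu_most( average of min(mu_young, mu_well_paid) over all records )
    sats = [min(mu_young(age), mu_well_paid(sal)) for age, sal in records]
    return mu_most(sum(sats) / len(sats))

employees = [(28, 72000), (34, 55000), (51, 80000), (25, 65000)]
print(truth_of_summary(employees))  # about 0.57
```

The degree of truth is what the interactive approach ranks candidate summaries by; protoforms generalize the template “Q objects are S” so the same machinery applies to any attribute combination.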


Author(s):  
Yingxu Wang

Inspired by the latest development in cognitive informatics and contemporary denotational mathematics, cognitive computing is an emerging paradigm of intelligent computing methodologies and systems, which implements computational intelligence by autonomous inferences and perceptions mimicking the mechanisms of the brain. This article presents a survey on the theoretical framework and architectural techniques of cognitive computing beyond conventional imperative and autonomic computing technologies. Theoretical foundations of cognitive computing are elaborated from the aspects of cognitive informatics, neural informatics, and denotational mathematics. Conceptual models of cognitive computing are explored on the basis of the latest advances in abstract intelligence and computational intelligence. Applications of cognitive computing are described from the aspects of autonomous agent systems and cognitive search engines, which demonstrate how machine and computational intelligence may be generated and implemented by cognitive computing theories and technologies toward autonomous knowledge processing.


Author(s):  
Jeffrey J.P. Tsai ◽  
Jia Zhang ◽  
Jeff J.S. Huang ◽  
Stephen J.H. Yang

This article presents an intelligent social grouping service for identifying the right participants to support CSCW and CSCL. We construct a three-layer hierarchical social network, in which we identify two important relationship ties: a knowledge relationship tie and a social relationship tie. We use these relationship ties as metrics to measure the collaboration strength between pairs of participants in a social network. The stronger the knowledge relationship tie, the more knowledgeable the participants; the stronger the social relationship tie, the more likely the participants are willing to share their knowledge. By analyzing and calculating these relationship ties among peers using our computational models, we present a systematic way to discover collaboration peers according to configurable and customizable requirements. Experiences with social grouping services for identifying communities of practice through peer-to-peer search are also reported.
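The abstract does not spell out the computational models, so the sketch below is only one plausible reading: the knowledge tie as a cosine similarity between keyword profiles, the social tie as a normalized interaction count, and collaboration strength as a configurable weighted combination. The profile format, the normalization, and the weight `alpha` are all assumptions for illustration.

```python
import math

def knowledge_tie(profile_a, profile_b):
    # Cosine similarity between two keyword-frequency profiles, a stand-in
    # for the paper's knowledge-relationship computation (assumption).
    common = set(profile_a) & set(profile_b)
    dot = sum(profile_a[k] * profile_b[k] for k in common)
    na = math.sqrt(sum(v * v for v in profile_a.values()))
    nb = math.sqrt(sum(v * v for v in profile_b.values()))
    return dot / (na * nb) if na and nb else 0.0

def social_tie(interactions, max_interactions):
    # Interaction count normalized into [0, 1] (assumption).
    return min(interactions / max_interactions, 1.0)

def collaboration_strength(profile_a, profile_b, interactions,
                           max_interactions=50, alpha=0.5):
    # Weighted combination of the two ties; alpha plays the role of a
    # configurable requirement from the grouping service.
    k = knowledge_tie(profile_a, profile_b)
    s = social_tie(interactions, max_interactions)
    return alpha * k + (1 - alpha) * s

alice = {"cscl": 3, "ontology": 2}   # hypothetical keyword profiles
bob = {"cscl": 1, "agents": 4}
print(collaboration_strength(alice, bob, interactions=20))
```

Ranking candidate peers by this score would give the systematic discovery of collaboration peers the article describes, with the weights tuned per requirement.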


Author(s):  
Bo Zhang ◽  
Franklin W. Schwartz ◽  
Daoqin Tong

Using the TOPEX radar altimeter for land cover studies has been of great interest due to TOPEX’s near-global coverage and the consistent availability of its waveform data for about one and a half decades, from 1992 to 2005. However, the complexity of the TOPEX Sensor Data Records (SDRs) makes recognition of the radar echoes particularly difficult. In this article, artificial neural computation, one of the most powerful approaches to pattern recognition, is investigated for water ratio assessment over the Lake of the Woods area using TOPEX reflected radar signals. Results demonstrate that neural networks are capable of identifying the water proportion from the TOPEX radar information while keeping the prediction errors within a reasonable range.
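The article's network architecture and inputs are not given here, so the following is only a toy stand-in for the idea of regressing a water fraction from a waveform-derived feature: a single sigmoid neuron trained by gradient descent. The feature ("echo flatness"), the training pairs, and all hyperparameters are fabricated for illustration.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train(samples, epochs=3000, lr=0.5):
    # One neuron with a sigmoid output keeps the predicted water
    # fraction in [0, 1]; squared-error gradient descent per sample.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            p = sigmoid(w * x + b)
            g = (p - y) * p * (1 - p)  # gradient of squared error w.r.t. w*x+b
            w -= lr * g * x
            b -= lr * g
    return w, b

# hypothetical pairs: (normalized echo-flatness feature, observed water fraction)
data = [(0.1, 0.05), (0.4, 0.35), (0.7, 0.70), (0.9, 0.95)]
w, b = train(data)
print(sigmoid(w * 0.9 + b))  # flatter, more specular echoes -> more water
```

The actual study would use many waveform bins as inputs and a multi-layer network, but the training principle is the same.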


Author(s):  
Luis F. de Mingo ◽  
Nuria Gómez ◽  
Fernando Arroyo ◽  
Juan Castellanos

This article presents a neural network model that makes it possible to build a conceptual hierarchy to approximate functions over a given interval. Bio-inspired axo-axonic connections are used; in these connections, the signal weight between two neurons is computed by the output of another neuron. Such an architecture can generate polynomial expressions with linear activation functions, so the network can approximate any pattern set with a polynomial equation. This neural system classifies an input pattern as an element belonging to one of the system’s categories, continuing until an exhaustive classification is obtained. The proposed model is not a hierarchy of neural networks; rather, it establishes relationships among all the different neural networks in order to propagate the activation. Each neural network is responsible for recognizing input patterns belonging to its prototyped category, and also for transmitting the activation to other neural networks so that the approximation can continue.
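A minimal sketch of the axo-axonic idea, under the stated assumption of linear activations: the effective weight on each input is itself the (linear) output of a modulating neuron reading the same input, so the overall output is a polynomial rather than a linear form. The specific parameterization below is illustrative, not the paper's architecture.

```python
def axo_axonic_output(x, base_w, gate_w):
    # Axo-axonic connection: the weight applied to input x[i] is not a
    # constant but base_w[i] plus the output of a linear gating neuron
    # (gate_w[i] . x).  With linear activations this makes the network
    # output a degree-2 polynomial in the inputs.
    weights = [sum(g * xj for g, xj in zip(gate_w[i], x)) + base_w[i]
               for i in range(len(x))]
    return sum(w * xi for w, xi in zip(weights, x))

# One input, base weight 2, gate weight 3:
# output = (3*x + 2) * x = 3x^2 + 2x, a quadratic from "linear" units.
for x in (0.0, 1.0, 2.0):
    print(axo_axonic_output([x], base_w=[2.0], gate_w=[[3.0]]))
```

Stacking such modulated layers raises the polynomial degree, which is how the model approximates a pattern set with a polynomial equation.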


Author(s):  
Sanjay Misra

One of the major issues in software engineering is measurement. Since traditional measurement theory has problems defining empirical observations on software entities in terms of their measured quantities, Morasca tried to solve this problem by proposing weak measurement theory. Furthermore, in calculating the complexity of software, emphasis is mostly placed on computational complexity, algorithmic complexity, and functional complexity, which essentially estimate time, effort, computability, and efficiency. On the other hand, the understandability and comprehensibility of software, which involve human interaction, are neglected in existing complexity measures. Recently, cognitive complexity (CC) was proposed to fill this gap by calculating the architectural and operational complexity of software. In this paper, we evaluate CC against the principles of weak measurement theory. We find that the approach for measuring CC is more realistic and practical in comparison to existing approaches and satisfies most of the parameters required by measurement theory.
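Cognitive complexity in this line of work is typically computed from cognitive weights assigned to basic control structures, with weights adding for sequential structures and multiplying for nested ones. The sketch below follows that commonly cited scheme; the specific weight values and the toy program representation are assumptions, not the paper's definitions.

```python
# Illustrative cognitive weights for basic control structures (assumed values)
COGNITIVE_WEIGHTS = {"sequence": 1, "branch": 2, "iteration": 3}

def cognitive_weight(node):
    kind = node[0]
    if kind == "bcs":      # a single basic control structure
        return COGNITIVE_WEIGHTS[node[1]]
    if kind == "seq":      # sequential structures: weights add
        return sum(cognitive_weight(c) for c in node[1])
    # "nest": an enclosing structure multiplies the weight of its body
    return COGNITIVE_WEIGHTS[node[1]] * sum(cognitive_weight(c) for c in node[2])

# A loop whose body is a branch followed by a plain statement,
# followed by another plain statement at the top level:
program = ("seq", [
    ("nest", "iteration", [("bcs", "branch"), ("bcs", "sequence")]),
    ("bcs", "sequence"),
])
print(cognitive_weight(program))  # 3 * (2 + 1) + 1 = 10
```

Because nesting multiplies rather than adds, deeply nested code scores much higher than the same structures in sequence, matching the intuition about human comprehension effort that CC is meant to capture.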


Author(s):  
Yucong Duan

This article first presents a thorough discussion of issues related to semantics formalization in model-driven engineering (MDE). Then, motivated by the purpose of software implementation and attempting to overcome the incompleteness and context-sensitivity of existing models, we propose to study the formalization of semantics from a cognitive background. The issues under study cover a broad scope: overlapping vs. incomplete vs. complete, closed world assumption (CWA) vs. open world assumption (OWA), Y(Yes)/N(No) vs. T(True)/F(False), subjective (SUBJ) vs. objective (OBJ), static vs. dynamic, unconscious vs. conscious, human vs. machine aspects, and so forth. A semantics formalization approach called EID-SCE (Existence Identification Dualism-Semantics Cosmos Explosion) is designed to meet both the theoretical investigation and the implementation of the proposed formalization goals. EID-SCE supports measuring/evaluating, in a {complete, no overlap} manner, whether a given concept or feature is an improvement. Some elementary cases are also shown to demonstrate the feasibility of EID-SCE.


Author(s):  
Ning Fang ◽  
Xiangfeng Luo ◽  
Weimin Xu

Based on the principle of cognitive economy, the complexity and the information of textual context are proposed as measures of the subjective cognitive degree of textual context. Based on the minimization of Boolean complexity in human concept learning, the complexity and the difficulty of textual context are defined in order to mimic the human reading experience. Based on the maximal relevance principle, the information and cognitive degree of textual context are defined in order to mimic the human cognitive sense. Experiments verify that the more context is added, the more easily the text is understood by a machine, which is consistent with the linguistic viewpoint that context helps in understanding a text. Furthermore, experiments verify that the author-given sentence sequence has lower complexity and more information than other sentence combinations; that is to say, the author-given sentence sequence is more easily understood by a machine. The principles of simplicity and maximal relevance therefore actually operate in the text-writing process, which is consistent with the cognitive science viewpoint. The chapter’s measuring methods are thus validated from the linguistic and cognitive perspectives, and they could provide a theoretical foundation for machine-based text understanding.

