Intelligent Data Analysis
Latest Publications

Total documents: 18 (five years: 0)
H-index: 3 (five years: 0)
Published by: IGI Global
ISBN: 9781599049823, 9781599049830

2009 ◽  
pp. 286-299
Author(s):  
Lean Yu ◽  
Shouyang Wang ◽  
Kin Keung Lai

A financial crisis is a typical rare event, but it is harmful to sustainable economic development when it occurs. In this chapter, a Hilbert-EMD-based intelligent learning approach is proposed to predict financial crisis events for early-warning purposes. In this approach, a typical financial indicator reflecting economic fluctuation, the currency exchange rate, is first chosen. The Hilbert-EMD algorithm is then applied to this indicator series. With the aid of the Hilbert-EMD procedure, intrinsic mode components (IMCs) of the data series at different scales are obtained. Using these IMCs, a support vector machine (SVM) classification paradigm is used to predict future financial crisis events from historical data. For illustration purposes, two Asian countries, South Korea and Thailand, both of which suffered the disastrous 1997-1998 financial crisis, are selected to verify the effectiveness of the proposed Hilbert-EMD-based SVM methodology.
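The decomposition step can be sketched with a single EMD sifting pass in plain Python. This is a toy illustration only, not the chapter's Hilbert-EMD implementation: it uses piecewise-linear envelopes rather than the cubic splines usually employed, and all data below are invented.

```python
import math

def local_extrema(x):
    """Indices of strict local maxima and minima of a series."""
    maxima, minima = [], []
    for i in range(1, len(x) - 1):
        if x[i - 1] < x[i] > x[i + 1]:
            maxima.append(i)
        elif x[i - 1] > x[i] < x[i + 1]:
            minima.append(i)
    return maxima, minima

def envelope(x, idx):
    """Piecewise-linear envelope through the points x[idx], padded flat at the ends."""
    pts = [(0, x[idx[0]])] + [(i, x[i]) for i in idx] + [(len(x) - 1, x[idx[-1]])]
    env, k = [], 0
    for i in range(len(x)):
        while k < len(pts) - 2 and i > pts[k + 1][0]:
            k += 1
        (x0, y0), (x1, y1) = pts[k], pts[k + 1]
        t = 0.0 if x1 == x0 else (i - x0) / (x1 - x0)
        env.append(y0 + t * (y1 - y0))
    return env

def sift_once(x):
    """One EMD sifting pass: subtract the mean of the upper and lower envelopes."""
    maxima, minima = local_extrema(x)
    if not maxima or not minima:
        return list(x)  # monotone segment: nothing to sift
    upper, lower = envelope(x, maxima), envelope(x, minima)
    return [xi - (u + l) / 2.0 for xi, u, l in zip(x, upper, lower)]
```

A full EMD would iterate the sift until the candidate satisfies the IMC conditions, subtract it, and repeat on the residue; the resulting components would then feed the SVM classifier.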


2009 ◽  
pp. 236-253
Author(s):  
Malcolm J. Beynon

This chapter demonstrates intelligent data analysis, within the environment of uncertain reasoning, using the recently introduced CaRBS technique, which has its mathematical rudiments in Dempster-Shafer theory. A series of classification and ranking analyses are undertaken on a bank rating application, looking at Moody's bank financial strength rating (BFSR). The results presented involve the association of each bank with a low or high BFSR, with emphasis on the graphical exposition of the results, including the use of a series of simplex plots. Throughout the analysis there is discussion of how the presence of ignorance in the results should be handled: whether it should be excluded (belief) or included (plausibility) in the evidence supporting the classification or ranking of the banks.
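The Dempster-Shafer machinery underlying CaRBS can be illustrated on a two-class frame (low vs. high BFSR). The mass values in the test are invented for illustration; belief excludes the mass assigned to the whole frame (ignorance) while plausibility includes it, which is exactly the distinction the chapter discusses.

```python
def combine(m1, m2):
    """Dempster's rule of combination: conjunctive pooling with conflict normalisation."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def belief(m, a):
    """Mass committed exactly to subsets of a (ignorance excluded)."""
    return sum(v for s, v in m.items() if s <= a)

def plausibility(m, a):
    """Mass not contradicting a (ignorance included)."""
    return sum(v for s, v in m.items() if s & a)

# Binary frame of discernment for a bank's rating.
LOW, HIGH = frozenset({"low"}), frozenset({"high"})
THETA = LOW | HIGH  # mass on the whole frame represents ignorance
```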


2009 ◽  
pp. 45-64
Author(s):  
Gráinne Kerr ◽  
Heather Ruskin ◽  
Martin Crane

Microarray technology provides an opportunity to monitor mRNA expression levels of thousands of genes simultaneously in a single experiment. The enormous amount of data produced by this high-throughput approach presents a challenge for data analysis: to extract meaningful patterns, to evaluate data quality, and to interpret the results. The most commonly used method of identifying such patterns is cluster analysis. Approaches common to, and sufficient for, many data-mining problems, for example hierarchical and K-means clustering, do not address well the properties of “typical” gene expression data and fail, in significant ways, to account for its profile. This chapter clarifies some of the issues and provides a framework for evaluating clustering in gene expression analysis. Methods are categorised explicitly in the context of application to data of this type, providing a basis for reverse engineering of gene regulation networks. Finally, areas for possible future development are highlighted.
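As a baseline for the clustering methods the chapter surveys, a minimal K-means over toy expression profiles might look as follows. The profiles and starting centroids are invented; real microarray analysis would also involve normalisation and a distance measure suited to expression data.

```python
def kmeans(points, centroids, iters=20):
    """Plain K-means: assign each profile to its nearest centroid, then recompute centroids."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda k: sum((a - b) ** 2 for a, b in zip(p, centroids[k])))
            clusters[nearest].append(p)
        centroids = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[k]
                     for k, cl in enumerate(clusters)]
    return centroids, clusters
```

The shortcomings the chapter highlights (hard assignments, spherical cluster bias, a fixed number of clusters) are all visible in this sketch.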


2009 ◽  
pp. 185-200
Author(s):  
J. P. Ganjigatti ◽  
Dilip Kumar Pratihar

In this chapter, an attempt has been made to design suitable knowledge bases (KBs) for carrying out forward and reverse mappings of a Tungsten inert gas (TIG) welding process. In forward mapping, the outputs (also known as the responses) are expressed as functions of the input variables (also called the factors), whereas in reverse mapping, the factors are represented as functions of the responses. Both the forward and reverse mappings are required for effective online control of a process. Conventional statistical regression analysis is able to carry out the forward mapping efficiently, but it may not always be able to solve the problem of reverse mapping. This chapter makes a novel attempt to conduct the forward and reverse mappings of a TIG welding process using fuzzy logic (FL)-based approaches, which are found to solve the said problem efficiently.
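The forward-mapping idea can be sketched with ordinary least squares on a single hypothetical factor and response (welding current and bead width; the numbers are invented). The reverse step below simply inverts the fitted line, which only works for a monotone single-factor model; the chapter's point is that reverse mapping is not this easy in general, which motivates the fuzzy approaches.

```python
def fit_line(xs, ys):
    """Least-squares fit of the forward mapping y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def reverse(y, a, b):
    """Reverse mapping by inversion; only valid for a monotone single-factor model."""
    return (y - a) / b
```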


2009 ◽  
pp. 143-160
Author(s):  
M. C. Bartholomew-Biggs ◽  
Z. Ulanowski ◽  
S. Zakovic

We discuss some experience of solving an inverse light scattering problem for single, spherical, homogeneous particles using least squares global optimization. If there is significant noise in the data, the particle corresponding to the “best” solution may not correspond well to the “actual” particle. One way of overcoming this difficulty involves the use of peak positions in the experimental data as a means of distinguishing genuine from spurious solutions. We introduce two composite approaches which combine conventional data fitting with peak-matching and show that they lead to a more robust identification procedure.
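One way to express the composite idea is a cost that adds a peak-position penalty to the usual least-squares residual, so that a parameter set fitting the residual well but placing peaks wrongly is marked as spurious. The weight `lam` and the peak definition below are illustrative choices, not those of the chapter.

```python
import math

def peak_positions(y):
    """Indices of local maxima in a sampled curve."""
    return [i for i in range(1, len(y) - 1) if y[i - 1] < y[i] >= y[i + 1]]

def composite_cost(model, data, lam=10.0):
    """Least-squares residual plus a penalty on mismatched peak positions."""
    residual = sum((m - d) ** 2 for m, d in zip(model, data))
    pm, pd = peak_positions(model), peak_positions(data)
    if pm and pd:
        # mean distance from each data peak to the nearest model peak
        penalty = sum(min(abs(i - j) for j in pm) for i in pd) / len(pd)
    else:
        penalty = float(len(data))  # no peaks to match: treat as maximally spurious
    return residual + lam * penalty
```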


2009 ◽  
pp. 103-119
Author(s):  
Arun Kulkarni ◽  
Sara McCaslin

This chapter introduces fuzzy neural network models as a means for knowledge discovery from databases. It describes architectures and learning algorithms for fuzzy neural networks. In addition, it introduces an algorithm for extracting and optimizing classification rules from a trained fuzzy neural network. As an illustration, multispectral satellite images have been analyzed using fuzzy neural network models. The authors hope that fuzzy neural network models and the methodology for generating classification rules from data samples provide a valuable tool for knowledge discovery. The algorithms are useful in a variety of data mining applications such as environment change detection, military reconnaissance, crop yield prediction, financial crimes and money laundering, and insurance fraud detection.
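A single fuzzy classification rule of the kind such a network might encode for multispectral data can be sketched with triangular membership functions. The band ranges and the rule itself are hypothetical, chosen only to show the mechanics (min as the fuzzy AND).

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fire_rule(nir, red):
    """Degree to which the hypothetical rule
    'IF NIR reflectance is High AND Red reflectance is Low THEN vegetation'
    fires, using min as the fuzzy conjunction."""
    return min(tri(nir, 0.4, 0.8, 1.2), tri(red, -0.4, 0.0, 0.4))
```

Rule extraction, as described in the chapter, would recover human-readable rules of this shape from the weights of a trained fuzzy neural network.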


2009 ◽  
pp. 1-17
Author(s):  
Martin Spott ◽  
Detlef Nauck

This chapter introduces a new way of using soft constraints for selecting data analysis methods that match certain user requirements. It presents a software platform for automatic data analysis that uses a fuzzy knowledge base for automatically selecting and executing data analysis methods. In order to support business users in running data analysis projects, the analytical process must be automated as much as possible. The authors argue that previous approaches based on the formalisation of analytical processes were less successful because selecting and running analytical methods is very much an experience-led heuristic process. The authors show that a system based on a fuzzy knowledge base that stores heuristic expert knowledge about data analysis can successfully lead to automatic intelligent data analysis.
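The soft-constraint selection idea can be sketched as min/max scoring over a tiny hand-made knowledge base: each method carries fuzzy degrees for properties, each requirement is a soft constraint, and the best method maximises the minimum degree of satisfaction. The method names and degrees are invented, not taken from the chapter's platform.

```python
def select_method(methods, requirements):
    """Score each method by the minimum degree to which it satisfies every
    requirement (a soft conjunction), and return the best-matching method."""
    def score(name):
        return min(methods[name].get(r, 0.0) for r in requirements)
    return max(methods, key=score)
```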


2009 ◽  
pp. 201-217
Author(s):  
Malcolm J. Beynon

This chapter considers the role of fuzzy decision trees as a tool for intelligent data analysis in domestic travel research. It demonstrates the readability and interpretability that findings from fuzzy decision tree analysis can attain, first presented on a small problem that allows the analysis to be followed in full. The investigation of traffic fatalities in the states of the US offers an example of a more comprehensive fuzzy decision tree analysis. Graphical representations of the fuzzy membership functions show how the necessary linguistic terms are defined. The final fuzzy decision trees, for both the tutorial and the US traffic fatalities data, show the structured form the analysis offers, as well as the readable decision rules contained therein.
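A fuzzy decision tree differs from a crisp one in that an example descends every branch to a degree. A minimal evaluator over hand-written root-to-leaf paths might look like this; the linguistic terms and paths in the test are illustrative, not the chapter's trees.

```python
def classify(paths, example):
    """Evaluate a fuzzy decision tree given as root-to-leaf paths.
    Each path is (conditions, class_label); a condition is (feature, membership_fn).
    An example's degree along a path is the product of its condition memberships,
    and degrees reaching the same class label are summed."""
    scores = {}
    for conditions, label in paths:
        degree = 1.0
        for feature, mf in conditions:
            degree *= mf(example[feature])
        scores[label] = scores.get(label, 0.0) + degree
    return max(scores, key=scores.get)
```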


2009 ◽  
pp. 131-142
Author(s):  
Thomas E. Potok ◽  
Xiaohui Cui ◽  
Yu Jiao

The rate at which information overwhelms humans significantly exceeds the rate at which humans have learned to process, analyze, and leverage it. To overcome this challenge, new methods of computing must be formulated, and scientists and engineers have looked to nature for inspiration in developing them. Consequently, evolutionary computing has emerged as a new paradigm of computing and has rapidly demonstrated its ability to solve real-world problems where traditional techniques have failed. This field has now become quite broad and encompasses areas ranging from artificial life to neural networks. This chapter specifically focuses on two sub-areas of nature-inspired computing: evolutionary algorithms and swarm intelligence.
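A (1+1) evolution strategy is about the smallest working member of the evolutionary-algorithm family: one parent, one Gaussian-mutated child per generation, and survival of the fitter. This sketch minimises a simple sphere function; the step size and iteration budget are arbitrary illustrative choices.

```python
import random

def one_plus_one_es(f, x0, sigma=0.5, iters=500, seed=42):
    """Minimal (1+1) evolution strategy: mutate, keep the child if it is no worse."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        child = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(child)
        if fc <= fx:  # elitist selection: never accept a worse point
            x, fx = child, fc
    return x, fx
```

Swarm methods such as particle swarm optimisation replace the single parent with a population whose members share information about good regions of the search space.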


2009 ◽  
pp. 300-308
Author(s):  
Chun-Jung Huang ◽  
Hsiao-Fan Wang ◽  
Shouyang Wang

One of the key problems in supervised learning is the insufficient size of the training data set. The natural way for an intelligent learning process to counter this problem and generalize successfully is to exploit prior information that may be available about the domain or that can be learned from prototypical examples. Building on the concept of creating virtual samples, the intervalized kernel method of density estimation (IKDE) is proposed to improve the ability to learn from a small data set. To demonstrate its theoretical validity, we provide a theorem based on Decomposition Theory. In addition, we propose an alternative approach to achieving better learning performance with IKDE.
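The kernel density estimation at the heart of IKDE, plus a crude stand-in for virtual-sample creation, can be sketched as follows. The interval half-width and the three-point augmentation scheme are illustrative assumptions, not the chapter's IKDE construction.

```python
import math

def gaussian_kde(samples, h):
    """Kernel density estimate with Gaussian kernels of bandwidth h."""
    norm = len(samples) * h * math.sqrt(2.0 * math.pi)
    def density(x):
        return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) / norm
    return density

def with_virtual_samples(samples, half_width):
    """Augment each observation with two interval end-points: a crude stand-in
    for the virtual-sample idea used to enlarge a small training set."""
    out = []
    for s in samples:
        out.extend([s - half_width, s, s + half_width])
    return out
```

Fitting the KDE on the augmented set smooths the estimate where the original small sample leaves gaps, which is the intuition behind learning from virtual samples.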

