Efficiency and Scalability Methods for Computational Intellect
Latest Publications


TOTAL DOCUMENTS: 14 (five years: 0)
H-INDEX: 3 (five years: 0)
Published By: IGI Global
ISBNs: 9781466639423, 9781466639430

Author(s):  
Natalia D. Nikolova ◽  
Kiril I. Tenekedjiev

The chapter focuses on the analysis of scaling constants when constructing a utility function over multi-dimensional prizes. Due to fuzzy rationality, those constants are elicited in interval form. It is assumed that the decision maker has provided additional information describing the uncertainty of the scaling constants’ values within their uncertainty intervals. The non-uniform method is presented to find point estimates of the interval scaling constants and to test their unit sum. An analytical solution of the procedure to construct the distribution of the interval scaling constants is provided, along with its numerical realization. A numerical procedure to estimate the p-value of the statistical test is also presented. The method allows the uncertainty of the constants to be described through different types of probability distributions and fuzzy sets.
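To illustrate the kind of numerical procedure involved, the following is a minimal Monte Carlo sketch of a unit-sum test for interval scaling constants. The triangular within-interval distributions and the test statistic are illustrative assumptions, not the chapter’s specific non-uniform method.

```python
import numpy as np

def unit_sum_p_value(intervals, point_estimates, n_samples=100_000, seed=0):
    """Monte Carlo p-value for the hypothesis that interval scaling
    constants sum to one.  Each constant is modelled here with a
    triangular distribution over its interval, peaked at its point
    estimate (an illustrative choice, not the chapter's method)."""
    rng = np.random.default_rng(seed)
    lo = np.array([a for a, b in intervals])
    hi = np.array([b for a, b in intervals])
    mode = np.asarray(point_estimates)          # must lie inside the intervals
    # One sample per constant per replication.
    samples = rng.triangular(lo, mode, hi, size=(n_samples, len(intervals)))
    sums = samples.sum(axis=1)
    # Observed deviation of the point estimates from a unit sum.
    observed = abs(mode.sum() - 1.0)
    # Fraction of simulated sums deviating at least as much from 1.
    return np.mean(np.abs(sums - 1.0) >= observed)

# Hypothetical intervals and point estimates for three scaling constants.
p = unit_sum_p_value([(0.2, 0.4), (0.3, 0.5), (0.2, 0.3)], [0.3, 0.4, 0.25])
```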


Author(s):  
Pierre-Emmanuel Leni ◽  
Yohan D. Fougerolle ◽  
Frédéric Truchetet

In 1900, Hilbert conjectured that high-order polynomial equations could not be solved by sums and compositions of continuous functions of fewer than three variables. This statement was proven wrong by the superposition theorem, demonstrated by Arnol’d and Kolmogorov in 1957, which allows every multivariate function to be written as sums and compositions of univariate functions. Amongst recent computable forms of the theorem, Igelnik and Parikh’s approach, known as the Kolmogorov Spline Network (KSN), offers several alternatives for the univariate functions as well as for their construction. A novel approach is presented for embedding authentication data (a black-and-white logo, or a translucent or opaque image) in images. This approach offers functionality similar to watermarking approaches but relies on an entirely different theory: the mark is not embedded in the 2D image space; rather, it is applied to an equivalent univariate representation of the transformed image. Using the progressive transmission scheme previously proposed (Leni, 2011), the pixels are re-arranged without any neighborhood consideration. Taking advantage of this naturally encrypted representation, the watermark is embedded in these univariate functions. The watermarked image can be accessed at any intermediate resolution and fully recovered (by removing the embedded mark) without loss using a secret key. Moreover, the key can differ for every resolution, and both the watermark and the image can be globally restored in case of data losses during transmission. The contribution lies in proposing a robust embedding of authentication data (represented by a watermark) into an image using the 1D space of univariate functions based on the Kolmogorov superposition theorem. Lastly, using a key, the watermark can be removed to restore the original image.
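The following toy sketch conveys the core idea of reversible embedding in a keyed 1D rearrangement of the pixels. It stands in for the chapter’s KSN-based univariate representation (which it does not implement): pixels are permuted with a secret key, the mark is written into least significant bits of the permuted stream, and the original bits are retained so the image can be restored exactly.

```python
import numpy as np

def embed_mark(image, mark_bits, key):
    """Embed mark bits at key-derived positions of a flattened uint8
    image and return the marked image plus the overwritten LSBs,
    which are needed (with the key) for lossless recovery."""
    flat = image.ravel().copy()
    perm = np.random.default_rng(key).permutation(flat.size)
    slots = perm[: len(mark_bits)]            # keyed positions for the mark
    original_lsbs = flat[slots] & 1           # keep for exact restoration
    flat[slots] = (flat[slots] & 0xFE) | np.asarray(mark_bits, dtype=flat.dtype)
    return flat.reshape(image.shape), original_lsbs

def remove_mark(marked, original_lsbs, n_bits, key):
    """Restore the original image using the key and the saved LSBs."""
    flat = marked.ravel().copy()
    perm = np.random.default_rng(key).permutation(flat.size)
    slots = perm[:n_bits]
    flat[slots] = (flat[slots] & 0xFE) | original_lsbs
    return flat.reshape(marked.shape)
```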


Author(s):  
Óscar Fontenla-Romero ◽  
Bertha Guijarro-Berdiñas ◽  
David Martinez-Rego ◽  
Beatriz Pérez-Sánchez ◽  
Diego Peteiro-Barral

Machine Learning (ML) addresses the problem of adjusting mathematical models so that they accurately predict a characteristic of interest of a given phenomenon; this is achieved by extracting information from the regularities contained in a data set. From its beginnings, two visions have coexisted in ML: batch and online learning. The former assumes full access to all data samples when adjusting the model, whilst the latter drops this limiting assumption, thereby expanding the applicability of ML. In this chapter, the general framework and methods of online learning are reviewed from its inception onward, and its applicability in current application areas is explored.
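The batch/online contrast is easy to see in code. Below is a minimal sketch of an online learner: logistic regression updated one sample at a time with stochastic gradient descent, so the full data set never needs to be held in memory. The model and learning rate are illustrative choices, not methods singled out by the chapter.

```python
import numpy as np

def online_logistic_sgd(stream, n_features, lr=0.1):
    """Online learning: each (x, y) pair is seen once, used for a
    single gradient update, and discarded.  A batch learner would
    instead require access to the whole data set at once."""
    w = np.zeros(n_features)
    for x, y in stream:                       # y in {0, 1}
        p = 1.0 / (1.0 + np.exp(-w @ x))      # current prediction
        w -= lr * (p - y) * x                 # gradient step on one sample
    return w

# Hypothetical usage: any iterable of (features, label) pairs works.
rng = np.random.default_rng(0)
data = ((rng.normal(size=3), rng.integers(0, 2)) for _ in range(1000))
w = online_logistic_sgd(data, n_features=3)
```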


Author(s):  
Amparo Alonso-Betanzos ◽  
Verónica Bolón-Canedo ◽  
Diego Fernández-Francos ◽  
Iago Porto-Díaz ◽  
Noelia Sánchez-Maroño

With the advent of high dimensionality, machine learning researchers are interested not only in accuracy but also in the scalability of algorithms. When dealing with large databases, pre-processing techniques are required to reduce input dimensionality, and machine learning can take advantage of feature selection, which consists of selecting the relevant features and discarding the irrelevant ones with minimum degradation in performance. This chapter reviews the most up-to-date feature selection methods, focusing on their scalability properties. It also shows how learning methods are enhanced by feature selection when applied to large-scale datasets and, finally, presents examples of the application of feature selection to real-world databases.
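As a concrete illustration of why filter-style feature selection scales well, here is a minimal univariate filter: features are ranked by absolute correlation with the target and the top k are kept, at a cost linear in both samples and features. This is a generic example, not one of the specific methods the chapter reviews.

```python
import numpy as np

def filter_select(X, y, k):
    """Univariate filter selection: rank features by absolute Pearson
    correlation with the target and keep the top k.  Cost is O(n * d),
    which is why filters remain practical on very large data sets."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
    scores = np.abs(Xc.T @ yc) / denom        # |Pearson r| per feature
    top = np.argsort(scores)[::-1][:k]        # indices of the k best features
    return X[:, top], top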


Author(s):  
Tohru Nitta

This chapter reviews widely linear estimation for complex numbers, quaternions, and geometric algebras (Clifford algebras), together with examples of its application. In 1995, it was proved mathematically that adding the complex conjugate z̄ of the input z as an explanatory variable is effective in the estimation of complex-valued data. The technique has since been extended to higher-dimensional algebras. Widely linear estimation improves the accuracy and efficiency of estimation, thereby expanding the scalability of the estimation framework, and it is applicable and useful in many fields, including neural computing with high-dimensional parameters.
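A minimal sketch of the complex case: the output is modelled as h·z + g·z̄ rather than h·z alone, which lets the estimator capture improper (rotation-variant) signals that a strictly linear model cannot fit. The data-generating example below is hypothetical.

```python
import numpy as np

def widely_linear_fit(z, d):
    """Widely linear least squares for complex data: regress d on the
    augmented pair (z, conj(z)) and return the coefficients [h, g]."""
    A = np.column_stack([z, np.conj(z)])           # augmented regressors
    coef, *_ = np.linalg.lstsq(A, d, rcond=None)
    return coef

# Hypothetical improper mapping that a strictly linear model cannot fit.
rng = np.random.default_rng(0)
z = rng.normal(size=200) + 1j * rng.normal(size=200)
d = (2 - 1j) * z + (0.5 + 0.3j) * np.conj(z)
h, g = widely_linear_fit(z, d)   # recovers approx. 2-1j and 0.5+0.3j
```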


Author(s):  
Tsvi Achler

The brain’s neuronal circuits responsible for recognition and attention are not completely understood, and several potential circuits using different mechanisms have been proposed. These models may vary in the number of connection parameters, the meaning of each connection weight, their efficiency, and their ability to scale to larger networks. Explicit analysis of these issues is important because, for example, certain models may require an implausible number of connections (more than are available in the brain) in order to process the amount of information the brain can process. Moreover, certain classifiers may perform recognition but may be difficult to integrate efficiently with attention models. In this chapter, some of these limitations and scalability issues are discussed, and a class of models that may address them is suggested. The focus is on modeling both recognition and a form of attention called biased competition. Models that are static during recognition and models that are dynamic during recognition are both explored.
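For readers unfamiliar with biased competition, one common formalization uses divisive normalization: units are driven by their inputs, scaled by an attentional bias, and suppressed by pooled activity. The toy dynamics below illustrate that general idea only; they are not the specific circuit class the chapter proposes.

```python
import numpy as np

def biased_competition(inputs, bias, sigma=1.0, steps=50, dt=0.1):
    """Toy biased-competition dynamics: each unit's drive is its input
    times an attentional bias, divided by a shared suppression pool,
    so biased units win the competition over time."""
    y = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        drive = bias * inputs
        y += dt * (drive / (sigma + y.sum()) - y)  # competition via shared pool
    return y

# Two equal inputs; attention biased toward the second unit.
print(biased_competition(np.array([1.0, 1.0]), np.array([1.0, 2.0])))
```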


Author(s):  
Dawei Du ◽  
Dan Simon

Biogeography-based optimization (BBO) is a recently developed heuristic algorithm that has shown impressive performance and efficiency on many standard benchmarks. The application of BBO is still limited because it was developed only four years ago. The objective of this chapter is to extend the application of BBO to large-scale combinatorial problems. The chapter addresses the solution of combinatorial problems with BBO combined with five techniques: (1) the nearest neighbor algorithm (NNA), (2) crossover methods designed for traveling salesman problems (TSPs), (3) local optimization methods, (4) greedy methods, and (5) density-based spatial clustering of applications with noise (DBSCAN). The chapter also discusses the advantages and disadvantages of each of these five techniques when used with BBO, and describes the construction of a combinatorial solver based on BBO. Finally, a framework for large-scale combinatorial problems based on hybrid BBO is proposed. On four benchmark problems, the experimental results demonstrate the quality and efficiency of the framework. On average, the algorithm reduces costs by over 69% on a 2152-city TSP compared to other methods: the genetic algorithm (GA), ant colony optimization (ACO), the nearest neighbor algorithm (NNA), and simulated annealing (SA). Convergence time for the algorithm is only 28.56 s on a 1.73-GHz quad-core PC with 6 GB of RAM. The algorithm also demonstrated good results on small and medium-sized problems such as ulysses16 (a 16-city TSP, where the best performance was obtained), st70 (a 70-city TSP, where the second-best performance was obtained), and rat575 (a 575-city TSP, where the second-best performance was obtained).
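The first of the five techniques, the nearest neighbor algorithm, is simple enough to sketch in a few lines. Constructions like this are commonly used to seed population-based methods such as BBO with reasonable starting tours; the distance matrix below is hypothetical.

```python
import numpy as np

def nearest_neighbor_tour(dist):
    """Nearest neighbor construction for the TSP: start at city 0 and
    repeatedly visit the closest unvisited city.  dist is an (n, n)
    symmetric distance matrix."""
    n = dist.shape[0]
    tour, visited = [0], {0}
    while len(tour) < n:
        last = tour[-1]
        unvisited = [c for c in range(n) if c not in visited]
        nxt = min(unvisited, key=lambda c: dist[last, c])
        tour.append(nxt)
        visited.add(nxt)
    return tour

# Hypothetical 4-city instance.
dist = np.array([[0, 2, 9, 10],
                 [2, 0, 6, 4],
                 [9, 6, 0, 8],
                 [10, 4, 8, 0]])
print(nearest_neighbor_tour(dist))  # [0, 1, 3, 2]
```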


Author(s):  
Peter J. Hawrylak ◽  
Chris Hartney ◽  
Michael Haney ◽  
Jonathan Hamm ◽  
John Hale

Identifying the level of intelligence of a cyber-attacker is critical to detecting cyber-attacks and determining the next targets or steps of the adversary. This chapter explores intrusion detection systems (IDSs), the traditional tool for cyber-attack detection, and attack graphs, a formalism used to model cyber-attacks. The time required to detect an attack can be reduced by classifying the attacker’s knowledge of the system to determine the traces, or signatures, for the IDS to look for in the audit logs. The adversary’s knowledge of the system can then be used to identify their most likely next steps from the attack graph. A computationally efficient technique to compute the likelihood and impact of each step of an attack is presented. The chapter concludes with a discussion of the next steps toward implementing these processes in specialized hardware to achieve real-time attack detection.
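To make the per-step likelihood-and-impact idea concrete, here is a minimal sketch that scores the steps reachable from the adversary’s current position in an attack graph by likelihood times impact. The graph structure, scores, and names are hypothetical; this is the general shape of such a computation, not the chapter’s specific technique.

```python
def rank_next_steps(graph, current, impact):
    """Score each attack step reachable from the current state by
    likelihood * impact and return them most-dangerous first."""
    candidates = graph.get(current, {})           # step -> likelihood
    scored = {step: p * impact[step] for step, p in candidates.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical attack graph: edge weights are step likelihoods.
graph = {"foothold": {"escalate": 0.6, "lateral": 0.3}}
impact = {"escalate": 9.0, "lateral": 5.0}
print(rank_next_steps(graph, "foothold", impact))
# [('escalate', 5.4), ('lateral', 1.5)]
```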


Author(s):  
Inna Stainvas ◽  
Alexandra Manevitch

A computer-aided detection (CAD) system for cancer detection from X-ray images is in high demand among radiologists. For CAD systems to be successful, a large amount of data has to be collected. This poses new challenges for developing learning algorithms that are efficient and scalable to large dataset sizes. One way to achieve this efficiency is through good feature selection.


Author(s):  
George Thomas ◽  
Timothy Wilmot ◽  
Steve Szatmary ◽  
Dan Simon ◽  
William Smith

This chapter discusses closed-loop control development and simulation results for a semi-active above-knee prosthesis. The closed-loop control is a delta control added to previously developed open-loop control. The control signal consists of two hydraulic valve settings; these valves control a rotary actuator that provides torque to the prosthetic knee. Closed-loop control using artificial neural networks (ANNs), an intelligent control method, is developed. The ANNs are trained with biogeography-based optimization (BBO), a recently developed evolutionary algorithm. This research contributes to the field of evolutionary algorithms by demonstrating that BBO is successful at finding optimal solutions to real-world, nonlinear, time-varying control problems. It contributes to the field of prosthetics by showing that effective closed-loop control signals can be found for a newly proposed semi-active hydraulic knee prosthesis. It also contributes to the field of ANNs by showing that they are able to mitigate some of the effects of the noise and disturbances that will be common in normal operation of a prosthesis, and that they can provide better robustness and safer operation with less risk of stumbles and falls. It is demonstrated that ANNs improve average performance over open-loop control by up to 8% and show the greatest improvement in performance when there is high risk of stumbles.
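For readers unfamiliar with BBO as a trainer for ANN weights, the sketch below shows one migration-plus-mutation step over a population of real-valued candidate vectors (e.g., flattened network weights), using the standard linear migration-rate model. It is a generic BBO step under those assumptions, not the chapter’s specific training setup.

```python
import numpy as np

def bbo_step(pop, fitness, rng, mut_prob=0.02, mut_scale=0.1):
    """One BBO generation over candidate vectors (fitness maximized).
    Good candidates emigrate features to poor ones: immigration
    probability is high for low-fitness candidates, and migration
    sources are chosen in proportion to emigration rates."""
    n, d = pop.shape
    order = np.argsort(fitness)[::-1]             # best candidate first
    rank = np.empty(n)
    rank[order] = np.arange(n)
    mu = 1.0 - rank / (n - 1)                     # emigration: 1 for the best
    lam = 1.0 - mu                                # immigration: 1 for the worst
    new_pop = pop.copy()
    for i in range(n):
        for j in range(d):
            if rng.random() < lam[i]:             # immigrate feature j
                src = rng.choice(n, p=mu / mu.sum())
                new_pop[i, j] = pop[src, j]
            if rng.random() < mut_prob:           # small Gaussian mutation
                new_pop[i, j] += mut_scale * rng.normal()
    return new_pop
```

In a training loop, `fitness` would come from simulating the prosthesis controller encoded by each weight vector and `bbo_step` would be applied until convergence.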

