Nature-Inspired Algorithms for Big Data Frameworks - Advances in Computational Intelligence and Robotics
Latest Publications

TOTAL DOCUMENTS: 15 (FIVE YEARS: 15)
H-INDEX: 2 (FIVE YEARS: 2)

Published by IGI Global
ISBN: 9781522558521, 9781522558538

Author(s):  
Ajay Kaushik ◽  
S. Indu ◽  
Daya Gupta

Wireless sensor networks (WSNs) are becoming increasingly popular due to their applications in a wide variety of areas. Sensor nodes in a WSN are battery operated, which creates the need for novel protocols that allow the limited sensor-node battery to be used efficiently. The authors propose the use of nature-inspired algorithms to achieve energy-efficient and long-lasting WSNs. Multiple nature-inspired techniques such as BBO, EBBO, and PSO are proposed in this chapter to minimize the energy consumption of a WSN. A large amount of data is generated by WSNs in the form of sensed information, which encourages the use of big data tools in the WSN domain. WSNs and big data are closely connected, since the large volume of data emerging from sensors can only be handled using big data tools. The authors describe how big data processing can be framed as an optimization problem and how that problem can be solved effectively using nature-inspired algorithms.
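The abstract names PSO among the techniques but gives no code; as a hedged illustration of the general idea, the sketch below uses a canonical particle swarm optimization to place two cluster heads so as to minimize a toy energy proxy. The node coordinates and the sum-of-squared-distances fitness are invented for illustration and are not the authors' formulation.

```python
import random

def pso(fitness, dim, bounds, n_particles=20, iters=50,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `fitness` over a box-constrained space with canonical PSO."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal bests
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]       # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy WSN: energy proxy = sum of squared distances from each sensor node
# to the nearest of two candidate cluster heads (the particle encodes the
# two head coordinates as [x1, y1, x2, y2]).
nodes = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]

def energy(p):
    heads = [(p[0], p[1]), (p[2], p[3])]
    return sum(min((x - hx) ** 2 + (y - hy) ** 2 for hx, hy in heads)
               for x, y in nodes)

best, cost = pso(energy, dim=4, bounds=(0.0, 10.0))
```

In this toy setup a good solution places one head near each of the two node clusters, which is the intuition behind metaheuristic cluster-head selection for energy efficiency.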


Author(s):  
Parul Agarwal ◽  
Shikha Mehta

Subspace clustering approaches cluster high-dimensional data in different subspaces, that is, they group the data under different relevant subsets of dimensions. The technique has become very effective because distance measures grow ineffective in a high-dimensional space. This chapter presents a novel evolutionary approach to bottom-up subspace clustering, SUBSPACE_DE, which is scalable to high-dimensional data. SUBSPACE_DE uses a self-adaptive DBSCAN algorithm to cluster the data instances of each attribute and of the maximal subspaces; the self-adaptive DBSCAN takes its input parameters from a differential evolution algorithm. The proposed SUBSPACE_DE algorithm is tested on 14 real and synthetic datasets and compared with 11 existing subspace clustering algorithms using evaluation metrics such as F1_Measure and accuracy. On a success-rate-ratio ranking, the proposed algorithm performs considerably better in both accuracy and F1_Measure, and it also shows potential scalability on high-dimensional datasets.
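The abstract describes DBSCAN taking its parameters from differential evolution but shows no code. The following is a minimal sketch of the classic DE/rand/1/bin scheme tuning a single radius-like parameter; the quadratic objective is a deliberately simple stand-in for a real clustering-quality score, and the "ideal eps = 0.5" is an invented value for illustration only.

```python
import random

def differential_evolution(fitness, bounds, pop_size=15, iters=60,
                           F=0.8, CR=0.9, seed=0):
    """Minimise `fitness` with classic DE/rand/1/bin over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [fitness(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Three distinct donors, none equal to the target vector i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)          # guaranteed crossover index
            trial = []
            for d in range(dim):
                if rng.random() < CR or d == j_rand:
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])  # mutation
                else:
                    v = pop[i][d]
                lo, hi = bounds[d]
                trial.append(min(hi, max(lo, v)))
            tf = fitness(trial)
            if tf <= fit[i]:                     # greedy selection
                pop[i], fit[i] = trial, tf
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Toy objective standing in for a clustering-quality score: pretend the
# ideal DBSCAN radius for some dataset is eps = 0.5.
best, score = differential_evolution(lambda x: (x[0] - 0.5) ** 2,
                                     bounds=[(0.01, 5.0)])
```

In a full SUBSPACE_DE-style pipeline the lambda above would be replaced by a function that runs DBSCAN with the candidate parameters and returns a (negated) cluster-quality measure.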


Author(s):  
Deepak Singh ◽  
Dilip Singh Sisodia ◽  
Pradeep Singh

Discretization is a popular pre-processing technique that helps a learner overcome the difficulty of handling a wide range of continuous-valued attributes. The objective of this chapter is to explore the possibilities for performance improvement on large-dimensional biomedical data by allying machine learning with evolutionary algorithms to design effective healthcare systems. To accomplish this goal, the model targets the preprocessing phase and develops a framework based on Fisher-Markov feature selection and evolutionary-based binary discretization (EBD) for microarray gene expression classification. Several experiments were conducted on publicly available microarray gene expression datasets, including colon tumor, lung cancer, and prostate cancer data. Performance is evaluated in terms of accuracy and standard deviation and is compared with other state-of-the-art techniques. The experimental results show that the EBD algorithm performs better than other contemporary discretization techniques.


Author(s):  
Priti Srinivas Sajja ◽  
Rajendra Akerkar

Traditional approaches like artificial neural networks, in spite of their intelligent support such as learning from large amounts of data, are of limited use for big data analytics for many reasons. The chapter discusses the difficulties of analyzing big data and introduces deep learning as a solution, presenting the necessary fundamentals of artificial neural networks, deep learning, and big data analytics. Various deep learning techniques and models for big data analytics are covered: autoencoders, deep belief nets, convolutional neural networks, recurrent neural networks, reinforcement learning neural networks, multimodal approaches, parallelization, and cognitive computing, together with the latest research and applications. The chapter concludes with a discussion of future research and application areas.
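To make the first model in that list concrete, here is a deliberately tiny sketch of an autoencoder: a linear 2-input, 1-hidden-unit, 2-output network trained by plain gradient descent to reconstruct its input. It is a teaching toy with invented data, not an architecture from the chapter.

```python
import random

def train_autoencoder(data, iters=500, lr=0.05, seed=0):
    """Tiny linear autoencoder (2 inputs -> 1 hidden unit -> 2 outputs)
    trained with gradient descent on mean squared reconstruction error."""
    rng = random.Random(seed)
    enc = [rng.uniform(-0.5, 0.5) for _ in range(2)]  # encoder weights
    dec = [rng.uniform(-0.5, 0.5) for _ in range(2)]  # decoder weights

    def mse():
        total = 0.0
        for x in data:
            h = enc[0] * x[0] + enc[1] * x[1]          # 1-D bottleneck code
            total += (dec[0] * h - x[0]) ** 2 + (dec[1] * h - x[1]) ** 2
        return total / len(data)

    initial = mse()
    for _ in range(iters):
        g_enc, g_dec = [0.0, 0.0], [0.0, 0.0]
        for x in data:
            h = enc[0] * x[0] + enc[1] * x[1]
            r = [dec[0] * h - x[0], dec[1] * h - x[1]]  # reconstruction error
            for j in (0, 1):
                g_dec[j] += 2 * r[j] * h
                g_enc[j] += 2 * (r[0] * dec[0] + r[1] * dec[1]) * x[j]
        for j in (0, 1):
            dec[j] -= lr * g_dec[j] / len(data)
            enc[j] -= lr * g_enc[j] / len(data)
    return initial, mse()

# Points on the line y = x: a single hidden unit suffices to encode them,
# so reconstruction error should fall substantially during training.
points = [(t, t) for t in (-1.0, -0.5, 0.0, 0.5, 1.0)]
loss_before, loss_after = train_autoencoder(points)
```

Deep autoencoders stack many such layers with nonlinearities; this linear one exists only to show the encode-decode-reconstruct loop in a few lines.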


Author(s):  
Anuja Arora ◽  
Aman Srivastava ◽  
Shivam Bansal

The conventional approach to building a chatbot system uses a sequence of complex algorithms, and the productivity of such systems depends on the order and coherence of those algorithms. This research work introduces and showcases a deep learning-based conversation system. The proposed approach is an intelligent conversation model that conceptually combines a graph model with a neural conversational model, layering the neural conversational model over a knowledge graph in a hybrid manner. The graph-based model answers questions written in natural language by resolving their intent in the knowledge graph, while the neural conversational model generates answers based on conversation content and conversation sequence order. NLP is used in the graph model, whereas the neural conversational model relies on natural language understanding and machine intelligence. The neural conversational model uses the seq2seq framework, as it requires less feature engineering and no domain knowledge. The results achieved with the authors' approach are competitive with those of the graph model used alone.
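The hybrid dispatch described above can be sketched in a few lines: try to answer from the knowledge graph first, and fall back to the neural model otherwise. The triples, the string-matching "intent resolution," and the `stub_seq2seq` placeholder below are all hypothetical simplifications; a real system would use NLP for intent detection and a trained seq2seq network for the fallback.

```python
def make_chatbot(knowledge_graph, neural_model):
    """Hybrid dispatcher sketch: resolve the question's intent against the
    knowledge graph first; otherwise defer to the conversational model."""
    def reply(question, history):
        intent = question.lower().rstrip("?")
        for subject, relation, value in knowledge_graph:
            if subject in intent and relation in intent:
                return value                      # graph-based factual answer
        return neural_model(question, history)    # generative fallback
    return reply

# Toy knowledge-graph triples and a stub standing in for a trained
# seq2seq model (invented for illustration).
kg = [("python", "creator", "Guido van Rossum"),
      ("everest", "height", "8849 m")]
stub_seq2seq = lambda question, history: "Tell me more."

bot = make_chatbot(kg, stub_seq2seq)
```

Factual questions whose subject and relation appear in the graph are answered directly; everything else routes to the generative model, mirroring the division of labor the abstract describes.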


Author(s):  
Shikha Mehta ◽  
Parmeet Kaur

Workflows are a commonly used model for describing applications that consist of computational tasks with data or control flow dependencies. They are used in domains such as bioinformatics, astronomy, and physics for data-driven scientific applications. Executing data-intensive workflow applications in a reasonable amount of time demands a high-performance computing environment. Cloud computing is a way of purchasing computing resources on demand through virtualization technologies; it provides the infrastructure to build and run workflow applications, known as 'Infrastructure as a Service.' However, it is necessary to schedule workflows on the cloud in a way that reduces the cost of leasing resources. Scheduling tasks on resources is an NP-hard problem, and using meta-heuristic algorithms is a natural choice. This chapter presents the application of nature-inspired algorithms (particle swarm optimization, the shuffled frog leaping algorithm, and the grey wolf optimizer) to the workflow scheduling problem on the cloud. Simulation results demonstrate the efficacy of the suggested algorithms.
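The chapter reports simulation results rather than code; as a rough illustration, here is a minimal sketch of one of the three algorithms it applies, the grey wolf optimizer, driving a toy task-to-VM assignment. The runtimes, prices, speeds, deadline, and penalty weight below are invented, and continuous positions are decoded to VM indices by truncation (one of several common encodings, not necessarily the authors').

```python
import random

def gwo(fitness, dim, bounds, n_wolves=15, iters=50, seed=0):
    """Canonical grey wolf optimizer (plus an elitist global-best archive)
    minimising `fitness` over a box-constrained search space."""
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    best, best_f = None, float("inf")
    for t in range(iters):
        wolves.sort(key=fitness)
        if fitness(wolves[0]) < best_f:
            best, best_f = wolves[0][:], fitness(wolves[0])
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 * (1 - t / iters)              # a decreases linearly 2 -> 0
        for i in range(n_wolves):
            new = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    A = 2 * a * rng.random() - a
                    C = 2 * rng.random()
                    x += leader[d] - A * abs(C * leader[d] - wolves[i][d])
                new.append(min(hi, max(lo, x / 3.0)))
            wolves[i] = new
    wolves.sort(key=fitness)
    if fitness(wolves[0]) < best_f:
        best, best_f = wolves[0][:], fitness(wolves[0])
    return best, best_f

# Toy workflow: 6 independent tasks (runtimes in hours) on 3 VM types;
# position component d decodes to the VM index for task d.
runtimes = [2, 3, 1, 4, 2, 3]
vm_price = [1.0, 2.5, 4.0]       # $/hour; pricier VMs are faster
vm_speed = [1.0, 2.0, 4.0]

def cost(p):
    finish = [0.0, 0.0, 0.0]
    dollars = 0.0
    for d, t in enumerate(runtimes):
        vm = min(2, max(0, int(p[d])))
        hours = t / vm_speed[vm]
        finish[vm] += hours
        dollars += hours * vm_price[vm]
    # Leasing cost plus a penalty for missing a 4-hour deadline.
    return dollars + 10.0 * max(0.0, max(finish) - 4.0)

best_assign, best_cost = gwo(cost, dim=6, bounds=(0.0, 3.0))
```

The shuffled frog leaping algorithm and PSO would plug into the same `cost` function unchanged, which is the practical appeal of framing scheduling as black-box optimization.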


Author(s):  
Sumitra Mukhopadhyay ◽  
Soumyadip Das

Spectrum sensing errors in cognitive radio may occur due to constant changes in the environment, such as changes in background noise, movements of the users, and temperature variations. These errors lead to under-usage of available spectrum bands or may cause interference with the primary user's transmission. Sensing parameters such as the detection threshold are therefore required to adapt dynamically to the changing environment to minimize sensing errors. Correct sensing requires processing huge data sets, much like big data. This chapter investigates sensing in light of big data and studies nature-inspired algorithms for sensing-error minimization by dynamic adaptation of the threshold value. Death-penalty constraint handling techniques are integrated into the genetic algorithm, particle swarm optimization, the firefly algorithm, and the bat algorithm, and four algorithms are developed on this basis for minimizing sensing errors. The reported algorithms are found to be faster and more accurate than previously proposed threshold adaptation algorithms based on gradient descent.
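Death-penalty constraint handling has a very simple core: any candidate that violates a constraint is assigned an infinite error so no optimizer will ever keep it. The sketch below wraps an invented sensing-error model (the false-alarm and missed-detection rate formulas are toy curves, not the chapter's) and drives it with a minimal (1+1) evolution strategy standing in for the GA/PSO/firefly/bat optimizers the authors actually use.

```python
import math
import random

def death_penalty(objective, constraints):
    """Wrap an objective so that any solution violating a constraint is
    'killed': it receives infinite error and can never be selected."""
    def wrapped(x):
        if any(not g(x) for g in constraints):
            return float("inf")
        return objective(x)
    return wrapped

def one_plus_one_es(fitness, lo, hi, iters=400, step=0.2, seed=0):
    """Minimal (1+1) evolution strategy over a single real parameter."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    while math.isinf(fitness(x)):        # resample until a feasible start
        x = rng.uniform(lo, hi)
    fx = fitness(x)
    for _ in range(iters):
        y = min(hi, max(lo, x + rng.gauss(0, step)))
        fy = fitness(y)
        if fy <= fx:                     # infeasible y has fy = inf: rejected
            x, fx = y, fy
    return x, fx

# Toy sensing-error model: false alarms fall and missed detections rise
# as the detection threshold grows (rates invented for illustration).
pfa = lambda th: math.exp(-th)           # false-alarm probability
pmd = lambda th: 1 - math.exp(-th / 4)   # missed-detection probability
total_error = lambda th: pfa(th) + pmd(th)

# Constraint: keep missed detections at or below 30%.
fit = death_penalty(total_error, [lambda th: pmd(th) <= 0.3])
best_th, best_err = one_plus_one_es(fit, 0.0, 5.0)
```

Because infeasible mutants are never accepted, the returned threshold always satisfies the constraint; the optimizer simply slides along the feasible region toward its boundary, where the toy error is smallest.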


Author(s):  
Pavani Konagala

A large volume of data is stored electronically, and it is very difficult to measure the total volume of that data. This large amount of data comes from various sources: stock exchanges, which may generate terabytes of data every day; Facebook, which may require about one petabyte of storage; and internet archives, which may store up to two petabytes of data. It is therefore very difficult to manage such data using relational database management systems, and with massive data, reading from and writing to the drive takes more time. The storage and analysis of this massive data has thus become a big problem. Big data techniques offer a solution by specifying methods to store and analyze very large data sets. This chapter presents a brief study of big data techniques for analyzing these types of data, including a wide-ranging look at Hadoop characteristics, Hadoop architecture, the advantages of big data, and the big data ecosystem. Further, the chapter includes a comprehensive study of Apache Hive for querying U.S. government health-related and deaths data.


Author(s):  
Mukta Goyal ◽  
Rajalakshmi Krishnamurthi

Given the emerging e-learning scenario, there is a need for software agents that teach individual users according to their skill. This chapter introduces software agents as intelligent tutors for personalized learning of English. The agents teach a user English in terms of reading, translation, and writing; they help users learn through recognition and synthesis of human speech and help them improve their handwriting. The main objective is to understand which aspect of the language a user wants to learn, addressing the intuitive nature of users' learning styles. To enable this, intelligent soft computing techniques have been used.


Author(s):  
Abhishek Ghosh Roy ◽  
Naba Kumar Peyada

The application of an adaptive neuro-fuzzy inference system (ANFIS)-based particle swarm optimization (PSO) algorithm to the problem of aerodynamic modeling and optimal parameter estimation for aircraft is addressed in this chapter. The ANFIS-based PSO optimizer constitutes an aircraft model in a restricted sense, capable of predicting generalized force and moment coefficients using measured motion and control variables only, without the formal requirement of conventional variables or their time derivatives. It is shown that such an approximate model can be used to extract the equivalent stability and control derivatives of a rigid aircraft.
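As a greatly simplified sketch of PSO-based parameter estimation, the code below fits a linear lift model CL = CL0 + CLa * alpha to synthetic flight data by minimizing the sum of squared prediction errors. The linear model stands in for the ANFIS approximator, and the "true" coefficient values, angle-of-attack range, and bounds are invented for illustration.

```python
import random

def pso_fit(sse, dim, bounds, n=20, iters=80, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Canonical PSO minimising a sum-of-squared-errors objective."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pb = [p[:] for p in pos]
    pbf = [sse(p) for p in pos]
    gi = min(range(n), key=lambda i: pbf[i])
    gb, gbf = pb[gi][:], pbf[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pb[i][d] - pos[i][d])
                             + c2 * rng.random() * (gb[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = sse(pos[i])
            if f < pbf[i]:
                pb[i], pbf[i] = pos[i][:], f
                if f < gbf:
                    gb, gbf = pos[i][:], f
    return gb, gbf

# Synthetic 'flight data': CL = CL0 + CLa * alpha with invented true
# values CL0 = 0.2 and CLa = 4.5 (per radian), alpha in [0, 0.38] rad.
alphas = [i * 0.02 for i in range(20)]
measured = [0.2 + 4.5 * a for a in alphas]

def sse(theta):
    return sum((theta[0] + theta[1] * a - y) ** 2
               for a, y in zip(alphas, measured))

est, err = pso_fit(sse, dim=2, bounds=(-1.0, 6.0))
```

The recovered `est` approximates the zero-lift coefficient and the lift-curve slope, the kind of stability-derivative extraction the abstract describes, except that the real chapter pairs PSO with an ANFIS model rather than a fixed linear one.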

