Advances in Systems Analysis, Software Engineering, and High Performance Computing - Deep Learning Techniques and Optimization Strategies in Big Data Analytics
Latest Publications


TOTAL DOCUMENTS: 17 (FIVE YEARS 17)

H-INDEX: 2 (FIVE YEARS 2)

Published By: IGI Global

ISBN: 9781799811923, 9781799811947

Author(s):  
Pattabiraman V. ◽  
Parvathi R.

Natural data flowing directly from various sources, such as text, images, video, audio, and sensor readings, inherently has a very large number of dimensions or features. While these features add richness and perspective to the data, the sparsity associated with them increases the computational complexity of learning, makes the data difficult to visualize and interpret, and demands large-scale computational power to extract insights. This is famously called the "curse of dimensionality." This chapter discusses conventional methods for mitigating the curse of dimensionality and analyzes their performance on complex datasets. It also discusses the advantages of nonlinear methods over linear ones, and of neural networks, which can be a better approach than other nonlinear methods. It concludes with future research directions, such as the application of deep learning techniques as a remedy for this curse.
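As a loose illustration of the linear-versus-nonlinear contrast the chapter draws, the sketch below (not taken from the chapter) reduces a synthetic manifold with PCA and with Isomap; the dataset and parameter choices are illustrative assumptions.

```python
# A minimal sketch contrasting a linear and a nonlinear dimensionality-reduction
# method on a synthetic dataset (illustrative assumptions, not the chapter's code).
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# Swiss-roll data: intrinsically 2-D but embedded in 3-D (stand-in for high-dimensional data).
X, _ = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)

# Linear reduction: PCA projects onto directions of maximum variance.
X_pca = PCA(n_components=2).fit_transform(X)

# Nonlinear reduction: Isomap preserves geodesic distances along the manifold.
X_iso = Isomap(n_components=2, n_neighbors=10).fit_transform(X)

print("PCA output shape:", X_pca.shape)
print("Isomap output shape:", X_iso.shape)
```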


Author(s):  
J. Joshua Thomas ◽  
Tran Huu Ngoc Tran ◽  
Gilberto Pérez Lechuga ◽  
Bahari Belaton

Applying deep learning to pervasive graph data is significant because of the unique characteristics of graphs. Recently, substantial research effort has been devoted to this area, greatly advancing graph-analysis techniques. In this study, the authors comprehensively review different kinds of deep learning methods applied to graphs. They organize the existing literature into sub-components: graph convolutional networks, graph autoencoders, and recent trends in chemoinformatics, including molecular fingerprints and drug discovery. They further experiment with a variational autoencoder (VAE), analyze how these methods apply to drug-target interaction (DTI) prediction, give a brief outline of how they assist the drug discovery pipeline, and discuss potential research directions.
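For readers unfamiliar with the VAE building block the chapter experiments with, here is a minimal PyTorch sketch; the 2048-dimensional fingerprint-like input, layer sizes, and random data are assumptions, not the authors' model or dataset.

```python
# A minimal VAE sketch in PyTorch (illustrative only; the chapter's model,
# input featurization, and hyperparameters are not specified here).
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, in_dim=2048, hidden=256, latent=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent)       # mean of q(z|x)
        self.logvar = nn.Linear(hidden, latent)   # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    bce = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

# Example: a batch of 2048-bit molecular-fingerprint-like vectors (random stand-in data).
x = torch.randint(0, 2, (8, 2048)).float()
recon, mu, logvar = VAE()(x)
print(vae_loss(x, recon, mu, logvar).item())
```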


Author(s):  
Rajalakshmi R. ◽  
Hans Tiwari ◽  
Jay Patel ◽  
Rameshkannan R. ◽  
Karthik R.

Children of Generation Z rely heavily on the internet for purposes such as entertainment, sports, and school projects, creating a demand for parental control systems that monitor children while they browse. Current web page classification approaches are not very effective because they extract handcrafted features from page content and apply machine learning techniques that require domain knowledge. Hence, a deep learning approach is proposed to perform URL-based web page classification. Because a URL is a short text, the model must learn where the important information lies within it. The proposed system integrates the strength of an attention mechanism with a recurrent convolutional neural network for effective learning of context-aware URL features. This enhanced architecture improves kids-relevant URL classification. Experiments on the benchmark Open Directory Project collection show an accuracy of 0.8251.
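The sketch below is not the authors' architecture; it only illustrates the general idea of attention over a recurrent encoding of a short URL string. The character-level preprocessing, layer sizes, and two-class setup are assumptions.

```python
# Illustrative sketch: a character-level BiLSTM with additive attention
# for binary URL classification (not the published model).
import torch
import torch.nn as nn

class AttnURLClassifier(nn.Module):
    def __init__(self, vocab=128, emb=32, hidden=64, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.rnn = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # scores each character position
        self.out = nn.Linear(2 * hidden, classes)

    def forward(self, x):                      # x: (batch, url_length) of char ids
        h, _ = self.rnn(self.embed(x))         # (batch, length, 2*hidden)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # attention over positions
        context = (weights.unsqueeze(-1) * h).sum(dim=1)          # weighted sum of states
        return self.out(context)

# Encode an example URL as ASCII codes (hypothetical preprocessing).
url = "www.example-kids-games.com/play"
ids = torch.tensor([[min(ord(c), 127) for c in url]])
print(AttnURLClassifier()(ids).shape)          # -> torch.Size([1, 2])
```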


Author(s):  
Burcu Yilmaz ◽  
Hilal Genc ◽  
Mustafa Agriman ◽  
Bugra Kaan Demirdover ◽  
Mert Erdemir ◽  
...  

Graphs are powerful data structures that allow us to represent varied relationships within data. In the past, because of the time complexity of processing graph models, graphs were rarely involved in machine learning tasks. In recent years, especially with advances in deep learning techniques, an increasing number of graph models for feature engineering and machine learning have been proposed. Recently, there has been a rise in approaches that automatically learn to encode graph structure into low-dimensional embeddings. These approaches are accompanied by models for machine learning tasks, and they fall into two categories. The first focuses on feature engineering techniques on graphs. The second group of models incorporates graph structure to learn a graph neighborhood within the machine learning model. In this chapter, the authors focus on advances in applying graphs to NLP using recent deep learning models.
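To make the idea of encoding graph structure into low-dimensional embeddings concrete, here is a DeepWalk-style sketch under stated assumptions: a toy graph, simple uniform random walks, and a skip-gram model; it is not the chapter's method.

```python
# A DeepWalk-style sketch (illustrative, not from the chapter): truncated random
# walks over a toy graph are fed to a skip-gram model to learn node embeddings.
import random
import networkx as nx
from gensim.models import Word2Vec

G = nx.karate_club_graph()                       # toy graph as a stand-in

def random_walk(graph, start, length=10):
    walk = [start]
    for _ in range(length - 1):
        walk.append(random.choice(list(graph.neighbors(walk[-1]))))
    return [str(n) for n in walk]                # Word2Vec expects string tokens

walks = [random_walk(G, n) for n in G.nodes() for _ in range(20)]
model = Word2Vec(walks, vector_size=32, window=5, min_count=1, sg=1, epochs=5)
print(model.wv[str(0)].shape)                    # 32-dimensional embedding of node 0
```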


Author(s):  
Timothy Ganesan ◽  
Pandian Vasant ◽  
Igor Litvinchev

As industrial systems grow more complex, various uncertainties come into play, and metaheuristic optimization techniques have become crucial for their effective design, maintenance, and operation. However, in highly complex industrial systems, conventional metaheuristics are still plagued by various drawbacks. Strategies such as hybridization and algorithmic modification have been the focus of previous efforts to improve their performance. This work tackles a large-scale multi-objective (MO) optimization problem: a biofuel supply chain. Owing to the scale and complexity of the problem, a random matrix approach was employed to modify the stochastic generator segment of the cuckoo search (CS) technique. Comparative analysis was then performed on the computational results produced by the conventional CS technique and the improved CS variants.
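For orientation, below is a bare-bones sketch of conventional cuckoo search on a toy single-objective test function; the random-matrix modification, the multi-objective handling, and the biofuel supply-chain model from the chapter are not reproduced, and all parameter values are assumptions.

```python
# A bare-bones cuckoo search sketch (illustrative; the chapter's random-matrix
# modification and the biofuel supply-chain objectives are not reproduced here).
import numpy as np

def levy_step(dim, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths.
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim=5, n_nests=15, iters=200, pa=0.25, alpha=0.01):
    nests = np.random.uniform(-5, 5, (n_nests, dim))
    fitness = np.apply_along_axis(f, 1, nests)
    for _ in range(iters):
        best = nests[np.argmin(fitness)]
        for i in range(n_nests):
            new = nests[i] + alpha * levy_step(dim) * (nests[i] - best)  # Levy flight
            fn = f(new)
            if fn < fitness[i]:
                nests[i], fitness[i] = new, fn
        # Abandon a fraction pa of the worst nests and rebuild them randomly.
        worst = np.argsort(fitness)[-int(pa * n_nests):]
        nests[worst] = np.random.uniform(-5, 5, (len(worst), dim))
        fitness[worst] = np.apply_along_axis(f, 1, nests[worst])
    return nests[np.argmin(fitness)], fitness.min()

sphere = lambda x: float(np.sum(x ** 2))         # toy single-objective test function
print(cuckoo_search(sphere))
```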


Author(s):  
Anongpun Man-Im ◽  
Weerakorn Ongsakul ◽  
Nimal Madhu M.

Power system scheduling is one of the most complex multi-objective scheduling problems, and a heuristic optimization method is designed here to find the optimal power flow (OPF) solution. A stochastic-weight trade-off, chaotic-mutation-based non-dominated sorting particle swarm optimization algorithm improves solution-search capability by balancing global exploration against local exploitation through stochastic weights and dynamic coefficient trade-offs. Chaotic mutation further enhances diversity and search capability, preventing premature convergence. Non-dominated sorting and crowding distance techniques efficiently construct the optimal Pareto front, and a fuzzy membership function is used to select the local best-compromise solution. Using a two-stage approach, the global best solution is then selected from many local trials. The discussed approach schedules the generators in the system effectively, leading to savings in fuel cost, reduction in active power loss, and improvement in voltage stability.
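One reusable ingredient mentioned above is the crowding distance used to maintain diversity along the Pareto front. The sketch below shows that measure on a toy two-objective front (fuel cost versus active power loss); the values are made up and this is not the authors' full two-stage algorithm.

```python
# Sketch of the crowding-distance measure used to preserve Pareto-front diversity
# (illustrative; not the authors' full two-stage algorithm).
import numpy as np

def crowding_distance(objectives):
    """objectives: (n_solutions, n_objectives) array for one non-dominated front."""
    n, m = objectives.shape
    distance = np.zeros(n)
    for j in range(m):
        order = np.argsort(objectives[:, j])
        distance[order[0]] = distance[order[-1]] = np.inf   # boundary points always kept
        span = objectives[order[-1], j] - objectives[order[0], j]
        if span == 0:
            continue
        for k in range(1, n - 1):
            distance[order[k]] += (objectives[order[k + 1], j] -
                                   objectives[order[k - 1], j]) / span
    return distance

# Toy front: fuel cost vs. active power loss for five candidate schedules.
front = np.array([[1.0, 9.0], [2.0, 7.0], [3.0, 5.5], [4.0, 4.0], [5.0, 1.0]])
print(crowding_distance(front))
```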


Author(s):  
Menaga D. ◽  
Revathi S.

Multimedia applications are a significant and growing research area because of advances in software engineering, storage devices, networks, and display devices. To satisfy users' multimedia information needs, it is essential to build efficient applications for multimedia information processing, access, and analysis that support tasks such as retrieval, recommendation, search, classification, and clustering. Deep learning is an emerging technique in multimedia information processing that addresses the shortcomings of both conventional and recent approaches. The main aim is to resolve multimedia-related problems through deep learning. The deep learning revolution is discussed along with its characteristics and features, and the major applications in different fields are explained. Finally, the chapter analyzes the retrieval problem after a discussion of multimedia information retrieval, that is, the ability to retrieve objects of any multimedia type.
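As a very rough illustration of the retrieval task described above, the sketch below performs nearest-neighbor search over deep-feature vectors; the random 512-dimensional "features" stand in for real CNN embeddings and are purely an assumption.

```python
# A minimal content-based retrieval sketch: items are represented by deep feature
# vectors (random stand-ins here) and queried by cosine-distance nearest neighbors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 512))          # pretend CNN features of 1000 media items
index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(gallery)

query = rng.normal(size=(1, 512))               # feature vector of the query item
distances, ids = index.kneighbors(query)
print("top-5 matches:", ids[0], "cosine distances:", np.round(distances[0], 3))
```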


Author(s):  
Fawaz H. H. Mahyoub ◽  
Rosni Abdullah

The prediction of protein secondary structure from a protein sequence provides useful information for predicting the three-dimensional structure and function of the protein. In recent decades, protein secondary structure prediction systems have improved, benefiting from advances in computational techniques as well as from the growing availability of solved structures in protein data banks. Existing methods for predicting the secondary structure of proteins can be roughly subdivided into statistical, nearest-neighbor, machine learning, meta-predictor, and deep learning approaches. This chapter provides an overview of these computational approaches, focusing on deep learning techniques and highlighting key aspects of each approach.
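To make the deep learning category concrete, here is a toy sequence-to-sequence predictor: a BiLSTM over one-hot amino-acid encodings emitting 3-state labels (helix/strand/coil). It is not any published predictor; the encoding, layer sizes, and example sequence are assumptions.

```python
# Illustrative sketch of a per-residue secondary structure predictor (H/E/C).
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {a: i for i, a in enumerate(AMINO_ACIDS)}

class SSPredictor(nn.Module):
    def __init__(self, hidden=64, states=3):
        super().__init__()
        self.rnn = nn.LSTM(len(AMINO_ACIDS), hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, states)   # per-residue H/E/C logits

    def forward(self, x):                          # x: (batch, length, 20)
        h, _ = self.rnn(x)
        return self.out(h)

def one_hot(seq):
    x = torch.zeros(len(seq), len(AMINO_ACIDS))
    for i, aa in enumerate(seq):
        x[i, AA_INDEX[aa]] = 1.0
    return x.unsqueeze(0)

logits = SSPredictor()(one_hot("MKTAYIAKQR"))      # hypothetical 10-residue sequence
print(logits.shape)                                # -> torch.Size([1, 10, 3])
```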


Author(s):  
Anoop Balakrishnan Kadan ◽  
Perumal Sankar Subbian ◽  
Jeyakrishnan V. ◽  
Hariharan N. ◽  
Roshini T. V. ◽  
...  

Diabetic retinopathy (DR), which affects the blood vessels of the human retina, is considered the most serious complication prevalent among diabetic patients. If it is detected at an early stage, an ophthalmologist can treat the patient with advanced laser therapy to prevent total blindness. In this study, a technique based on morphological image processing and fuzzy logic is explored to detect hard exudates in DR retinal images. A neural network classifier is added before the fuzzy logic stage to predict whether the eye is affected; the fuzzy logic stage then indicates how much of the retina is affected and where. Together, the proposed technique determines whether an eye is normal or abnormal.
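The fragment below is only a rough sketch of morphological candidate detection for bright lesions such as hard exudates; the channel choice, kernel sizes, and threshold are assumptions, and the chapter's neural network and fuzzy stages are omitted.

```python
# Rough sketch of morphological hard-exudate candidate detection (illustrative;
# thresholds and kernel sizes are assumed, and the NN/fuzzy stages are omitted).
import cv2
import numpy as np

def exudate_candidates(fundus_bgr):
    green = fundus_bgr[:, :, 1]                          # exudates appear brightest in the green channel
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    closed = cv2.morphologyEx(green, cv2.MORPH_CLOSE, kernel)   # suppress dark vessels
    tophat = cv2.subtract(closed, cv2.medianBlur(closed, 31))   # keep small bright structures
    _, mask = cv2.threshold(tophat, 15, 255, cv2.THRESH_BINARY) # assumed intensity threshold
    return mask

img = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)  # placeholder for a fundus image
print(exudate_candidates(img).shape)
```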


Author(s):  
Md. Shokor A. Rahaman ◽  
Pandian Vasant

Total organic carbon (TOC) is the most significant factor for shale oil and gas exploration and development, as it can be used to evaluate the hydrocarbon generation potential of source rock. However, estimating TOC is a challenge for geological engineers because direct measurement through core-analysis geochemical experiments is time-consuming and costly. Therefore, many AI techniques have been used for TOC prediction in shale reservoirs, where they have had a positive impact. Each has strengths and weaknesses: some execute quickly and handle high-dimensional data, while others struggle with uncertainty, face learning difficulties, or cannot deal with very high- or low-dimensional datasets. This recalls the "no free lunch" theorem, which shows that no single technique suits all problems in all circumstances. Investigating these cutting-edge AI techniques is the contribution of this study, and the resulting analysis gives an in-depth understanding of the different TOC prediction strategies.
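As a hedged sketch of what data-driven TOC prediction can look like, the example below fits a random forest regressor to synthetic well-log features; the feature set (gamma ray, resistivity, density, sonic), the data, and the model choice are assumptions, not the study's dataset or preferred technique.

```python
# Hedged sketch: a random forest mapping synthetic well-log measurements to TOC.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 500
# Hypothetical logs: gamma ray, deep resistivity, bulk density, sonic travel time.
X = rng.normal(size=(n, 4))
toc = 2.0 + 0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.2, size=n)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, toc, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out samples:", round(r2_score(y_te, model.predict(X_te)), 3))
```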

