International Journal of Computational Intelligence and Applications
Latest Publications


TOTAL DOCUMENTS

616
(FIVE YEARS 93)

H-INDEX

22
(FIVE YEARS 4)

Published By World Scientific

1757-5885, 1469-0268

Author(s):  
Bharathi Garimella ◽  
G. V. S. N. R. V. Prasad ◽  
M. H. M. Krishna Prasad

Churn prediction from telecom data has received great attention because of the increasing number of telecom providers, but inconsistent, sparse, and very large data make churn prediction complicated and challenging. Hence, an effective and optimal churn prediction mechanism, named the adaptive firefly-spider optimization (adaptive FSO) algorithm, is proposed in this research to predict churns from telecom data. The proposed churn prediction method uses telecom data, a trending research domain for churn prediction; hence, the classification accuracy is increased. The proposed adaptive FSO algorithm is designed by integrating spider monkey optimization (SMO), the firefly optimization algorithm (FA), and an adaptive concept. The input data is initially given to the master node of the Spark framework. Feature selection is carried out using Kendall’s correlation to select the appropriate features for further processing. The selected unique features are then given to the master node to perform churn prediction. Here, the churn prediction is made using a deep convolutional neural network (DCNN), which is trained by the proposed adaptive FSO algorithm. The developed model obtained better performance on metrics such as the dice coefficient, accuracy, and the Jaccard coefficient when varying the training data percentage and the selected features. Thus, the proposed adaptive FSO-based DCNN showed improved results with a dice coefficient of 99.76%, an accuracy of 98.65%, and a Jaccard coefficient of 99.52%.
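The Kendall-correlation feature-selection step described above could be sketched as follows. This is a hedged illustration, not the authors' implementation: the threshold, toy data, and function name are assumptions.

```python
# Sketch of Kendall-correlation feature selection for churn labels.
# The 0.5 threshold and the toy data below are illustrative assumptions.
import numpy as np
from scipy.stats import kendalltau

def select_features_kendall(X, y, threshold=0.5):
    """Keep the columns of X whose |Kendall tau| with the label y
    meets or exceeds the threshold."""
    selected = []
    for j in range(X.shape[1]):
        tau, _ = kendalltau(X[:, j], y)
        if abs(tau) >= threshold:
            selected.append(j)
    return selected

# Feature 0 tracks the churn label; feature 1 is weakly related noise.
X = np.array([[1, 5], [2, 3], [3, 8], [4, 1], [5, 9], [6, 2]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])
print(select_features_kendall(X, y))  # -> [0]
```

Only the strongly correlated column survives, which is the point of the step: downstream training sees fewer, more informative features.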


Author(s):  
Souhila Kahlouche ◽  
Mahmoud Belhocine ◽  
Abdallah Menouar

In this work, an efficient human activity recognition (HAR) algorithm based on a deep learning architecture is proposed to classify activities into seven different classes. To learn spatial and temporal features from only the 3D skeleton data captured by a Microsoft Kinect camera, the proposed algorithm combines convolutional neural network (CNN) and long short-term memory (LSTM) architectures. This combination takes advantage of the LSTM in modeling temporal data and of the CNN in modeling spatial data. The captured skeleton sequences are used to create a specific dataset of interactive activities; these data are then transformed according to a view-invariance and a symmetry criterion. To demonstrate the effectiveness of the developed algorithm, it has been tested on several public datasets, where it has matched and sometimes surpassed state-of-the-art performance. To verify the reliability of the proposed algorithm, some tools are provided and discussed to ensure its efficiency for continuous human action recognition in real time.
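A common way to make skeleton data view-invariant, as the abstract describes, is to express every joint relative to a root joint. The sketch below assumes the hip joint at index 0 is the root; the exact transform used by the authors may differ.

```python
# Hedged sketch of a view-invariant skeleton pre-processing step:
# translate each frame so the chosen root joint sits at the origin.
import numpy as np

def view_invariant(frames, root=0):
    """frames: array of shape (T, J, 3) -- T frames, J joints, xyz.
    Returns frames translated so the root joint is at the origin."""
    return frames - frames[:, root:root + 1, :]

# Two frames of the same pose seen from different camera translations.
frames = np.array([[[1., 1., 1.], [2., 1., 1.], [1., 3., 1.]],
                   [[5., 5., 5.], [6., 5., 5.], [5., 7., 5.]]])
out = view_invariant(frames)
print(out[0])  # both frames now share identical relative coordinates
```

After this step, identical poses captured from shifted viewpoints map to the same input, which makes the CNN+LSTM features far less camera-dependent.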


Author(s):  
Youssef Hami ◽  
Chakir Loqman

This research addresses the optimal allocation of tasks to processors in order to minimize the total costs of execution and communication. This problem is called the Task Assignment Problem (TAP) with nonuniform communication costs. To solve it, the first step formulates the problem as an equivalent zero-one quadratic program with a convex objective function, using a convexification technique based on the smallest eigenvalue. The second step applies the Continuous Hopfield Network (CHN) to the resulting problem. Computational results are presented for instances from the literature, compared to solutions obtained by both the CPLEX solver and a heuristic genetic algorithm, and show an improvement when only the CHN algorithm is applied. The proposed approach thus validates the theoretical results and achieves optimal solutions in a short computation time.
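The smallest-eigenvalue convexification the abstract mentions relies on the binary identity x_i² = x_i: shifting the quadratic matrix by its most negative eigenvalue and compensating on the linear term leaves the objective unchanged on binary points while making it convex. A minimal sketch, with an illustrative 2×2 instance:

```python
# Convexify a zero-one quadratic x'Qx + c'x by shifting Q with its
# smallest eigenvalue; x_i^2 = x_i for binary x keeps values unchanged.
import numpy as np

def convexify(Q, c):
    lam_min = np.linalg.eigvalsh(Q).min()
    alpha = max(0.0, -lam_min)               # shift only if Q is not PSD
    Qc = Q + alpha * np.eye(Q.shape[0])      # now positive semidefinite
    cc = c - alpha * np.ones(Q.shape[0])     # compensate: alpha*x_i^2 = alpha*x_i
    return Qc, cc

Q = np.array([[0., 3.], [3., 0.]])   # indefinite: eigenvalues -3 and 3
c = np.array([1., 1.])
Qc, cc = convexify(Q, c)

x = np.array([1., 0.])               # any binary point gives the same value
print(x @ Q @ x + c @ x, x @ Qc @ x + cc @ x)  # -> 1.0 1.0
```

The convexified program has the same binary optima as the original, which is what lets a continuous method such as the CHN be applied.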


Author(s):  
Kumar Cherukupalli ◽  
Vijaya Anand N

In this paper, a hybrid method for determining the optimal distributed generation (DG) size and location for power flow analysis in the smart grid is proposed. The proposed hybrid method combines the Interactive Autodidactic School (IAS) and the Most Valuable Player Algorithm (MVPA) and is named the IAS-MVPA method. The main aim of this work is to reduce line loss and total harmonic distortion (THD) and, similarly, to improve the voltage profile of the system through the optimal location and size of the distributed generators and optimal network reconfiguration. Here, the IAS-MVPA method is utilized as a corrective tool to obtain the optimal DG size and the optimal network reconfiguration under environmental load variation. In case of a fault, the IAS method is utilized to optimize the DG location. The IAS chooses the line with the maximal power loss as the optimal location to place the DG, based on the objective function. The fault violates the equality and inequality restrictions of the safe limit system. From the control parameters, the voltage deviation is improved using the MVPA method. The low-voltage deviation is exploited to obtain the optimal capacity of the DG. The optimal capacity is then used at the optimal location, which improves the power flow of the system. The proposed system is implemented on the MATLAB/Simulink platform, and its effectiveness is assessed by comparing it with various existing methods such as the genetic algorithm (GA), the cuttlefish algorithm (CFA), the adaptive grasshopper optimization algorithm (AGOA) and the artificial neural network (ANN).
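The location rule described above, placing the DG on the line with maximal power loss, reduces to an argmax over per-branch I²R losses. The branch currents and resistances below are toy values, not data from the paper:

```python
# Sketch of the "line with maximal power loss" DG-placement rule.
# Branch currents (A) and resistances (ohm) below are illustrative only.
import numpy as np

def pick_dg_location(branch_currents, branch_resistances):
    """Return (index of branch with largest I^2*R loss, loss vector)."""
    losses = branch_resistances * branch_currents ** 2
    return int(np.argmax(losses)), losses

I = np.array([10., 4., 7.])     # amps
R = np.array([0.2, 1.0, 0.5])   # ohms
idx, losses = pick_dg_location(I, R)
print(idx, losses)              # -> 2 [20.  16.  24.5]
```

In a full study the currents would come from a load-flow solution rather than fixed values, but the selection logic is the same.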


Author(s):  
Gokul Yenduri ◽  
B. R. Rajakumar ◽  
K. Praghash ◽  
D. Binu

The identification of opinions and sentiments from tweets is termed “Twitter Sentiment Analysis (TSA)”. The major process of TSA is to determine the sentiment or polarity of a tweet and then classify it as negative or positive. Several methods have been introduced for carrying out TSA; however, it remains challenging due to slang words, modern accents, grammatical and spelling mistakes, and other issues that existing techniques cannot solve. This work develops a novel customized BERT-oriented sentiment classification that encompasses two main phases: pre-processing and tokenization, and a “Customized Bidirectional Encoder Representations from Transformers (BERT)”-based classification. At first, the gathered raw tweets are pre-processed via stop-word removal, stemming and blank-space removal. After pre-processing, the semantic words are obtained, from which the meaningful words (tokens) are extracted in the tokenization phase. These extracted tokens are then classified via the optimized BERT, whose biases and weights are tuned optimally by Particle-Assisted Circle Updating Position (PA-CUP). Moreover, the maximal sequence length of the BERT encoder is updated using standard PA-CUP. Finally, a performance analysis is carried out to substantiate the enhancement of the proposed model.


Author(s):  
Asieh Khosravanian ◽  
Mohammad Rahmanimanesh ◽  
Parviz Keshavarzi

The Social Spider Algorithm (SSA) was introduced, based on the information-sharing foraging strategy of spiders, to solve continuous optimization problems. SSA was shown to outperform other state-of-the-art meta-heuristic algorithms in terms of best-achieved fitness values, scalability, reliability, and convergence speed. Preserving all the strengths and outstanding performance of SSA, we propose a novel algorithm named the Discrete Social Spider Algorithm (DSSA) for solving discrete optimization problems, by modifying the distance function, the construction of the follow position, the movement method, and the fitness function of the original SSA. DSSA is employed to solve the symmetric and asymmetric traveling salesman problems. To prove the effectiveness of DSSA, TSPLIB benchmarks are used, and the results are compared to those obtained by six different optimization methods: the discrete bat algorithm (IBA), the genetic algorithm (GA), an island-based distributed genetic algorithm (IDGA), evolutionary simulated annealing (ESA), the discrete imperialist competitive algorithm (DICA) and a discrete firefly algorithm (DFA). The simulation results demonstrate that DSSA outperforms the other evolutionary techniques for solving the TSP. DSSA can also be used for any other discrete optimization problem, such as routing problems.
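Whatever the movement rules, a discrete TSP solver such as DSSA ultimately minimizes a tour-length fitness. A minimal sketch of that fitness on Euclidean coordinates (the unit-square instance below is illustrative, not a TSPLIB benchmark):

```python
# Fitness function a discrete TSP metaheuristic would minimize:
# total Euclidean length of a closed tour over city coordinates.
import math

def tour_length(tour, coords):
    total = 0.0
    for i in range(len(tour)):
        x1, y1 = coords[tour[i]]
        x2, y2 = coords[tour[(i + 1) % len(tour)]]  # wrap back to start
        total += math.hypot(x2 - x1, y2 - y1)
    return total

coords = [(0, 0), (0, 1), (1, 1), (1, 0)]    # unit square
print(tour_length([0, 1, 2, 3], coords))     # perimeter tour: 4.0
print(tour_length([0, 2, 1, 3], coords))     # self-crossing tour is longer
```

Candidate spider positions are permutations; the algorithm's discrete distance and movement operators generate new permutations, and this function ranks them.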


Author(s):  
Jinfang Zeng ◽  
Youming Li ◽  
Yu Zhang ◽  
Da Chen

Environmental sound classification (ESC) is a challenging problem due to the complexity of sounds. To date, a variety of signal processing and machine learning techniques have been applied to the ESC task, including matrix factorization, dictionary learning, wavelet filterbanks and deep neural networks. It is observed that features extracted from deeper networks tend to achieve higher performance than those extracted from shallow networks. However, in the ESC task, only deep convolutional neural networks (CNNs) containing several layers have been used, and residual networks have been ignored, which leads to degraded performance. Meanwhile, a possible explanation for the limited exploration of CNNs and the difficulty of improving on simpler models is the relative scarcity of labeled data for ESC. In this paper, a residual network called EnvResNet is proposed for the ESC task. In addition, we propose to use audio data augmentation to overcome the problem of data scarcity. The experiments are performed on the ESC-50 database. Combined with data augmentation, the proposed model outperforms baseline implementations relying on mel-frequency cepstral coefficients and achieves results comparable to other state-of-the-art approaches in terms of classification accuracy.
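Waveform-level audio augmentation of the kind mentioned above is often a combination of small random time shifts and additive noise. The sketch below assumes those two transforms and toy parameters; the paper's actual augmentation recipe may differ:

```python
# Hedged sketch of audio data augmentation: random circular time shift
# plus additive Gaussian noise. Transforms and parameters are assumptions.
import numpy as np

def augment(clip, rng, max_shift=4, noise_std=0.005):
    shifted = np.roll(clip, rng.integers(-max_shift, max_shift + 1))
    return shifted + rng.normal(0.0, noise_std, size=clip.shape)

rng = np.random.default_rng(0)
clip = np.sin(np.linspace(0, 2 * np.pi, 16))  # stand-in for a waveform
aug = augment(clip, rng)
print(aug.shape)  # same length as the input clip
```

Each training clip yields many slightly different variants, which is how augmentation compensates for the small size of labeled ESC datasets such as ESC-50.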


Author(s):  
Luca Donati ◽  
Eleonora Iotti ◽  
Andrea Prati

Product sorting is a task of paramount importance for many countries’ agricultural industries. An accurate quality check ensures that good products are not wasted and that rotten, broken and bent food is properly discarded, which is extremely important for food production chains. Such product sorting and quality controls are often performed with consolidated instruments, since simple systems are easier to maintain and validate, and they speed up processing in terms of production line speed and products per second. Moreover, industries often lack the advanced training required for more sophisticated solutions. As a result, the sorting task for many food products is mainly done by color information only. Sorting machines typically detect the color response of products to specific LEDs with various light wavelengths. Unfortunately, a color check is often not enough to detect some very common defects. The shape of a product, instead, reveals many important defects and is highly reliable in detecting foreign objects mixed with the food. Shape can also be used to take detailed measurements of a product, such as its area, length, width, anisotropy, etc. This paper proposes a complete treatment of the problem of sorting food by its shape. It addresses real-world concerns such as accuracy, execution time and latency, and it provides an overview of a full system used on state-of-the-art measurement machines.
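The shape measurements listed above (area, length, width, anisotropy) can be computed directly from a binary product mask. This is a minimal bounding-box sketch under assumed definitions; a production system would likely use oriented moments instead:

```python
# Sketch of shape measurements from a binary product mask:
# area, bounding-box length/width, and anisotropy as length/width.
import numpy as np

def shape_stats(mask):
    ys, xs = np.nonzero(mask)               # pixel coordinates of the product
    area = len(ys)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    length, width = max(height, width), min(height, width)
    return area, length, width, length / width

mask = np.zeros((6, 6), dtype=int)
mask[2:4, 1:6] = 1         # a 2x5 elongated "bean"
print(shape_stats(mask))   # -> (10, 5, 2, 2.5)
```

An anisotropy well above 1 flags elongated items, so a threshold on it can separate, say, whole beans from broken fragments regardless of color.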

