Image Segmentation Using Electromagnetic Field Optimization (EFO) in E-Commerce Applications

Author(s):  
Pankaj Upadhyay ◽  
Jitender Kumar Chhabra

Image recognition plays a vital role in image-based product search and fake-logo identification on e-commerce sites, and image segmentation is an essential phase of efficient recognition. This article presents a physics-inspired electromagnetic field optimization (EFO)-based image segmentation method that works on an automatic clustering concept. The proposed approach is a physics-inspired, population-based metaheuristic that exploits the behavior of electromagnets, resulting in faster convergence and more accurate segmentation of images. EFO maintains a balance between exploration and exploitation using the nature-inspired golden ratio between attraction and repulsion forces, and converges quickly towards a globally optimal solution. A fixed-length real encoding scheme is used to represent particles in the population. The performance of the proposed method is compared with recent state-of-the-art metaheuristic algorithms for image segmentation on the BSDS 500 image data set. The experimental results indicate better accuracy and convergence speed than the compared algorithms.
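
As a sketch of how a fixed-length real encoding can drive clustering-based segmentation, a particle may be read as a flat vector of k cluster centers and scored by how tightly pixels group around their nearest center. The function names `decode_particle` and `clustering_fitness` are illustrative, not from the paper:

```python
import numpy as np

def decode_particle(particle, k, dim):
    # A particle is a flat real vector holding k cluster centers of `dim` features each.
    return np.asarray(particle, dtype=float).reshape(k, dim)

def clustering_fitness(particle, pixels, k):
    # Fitness = total distance of each pixel to its nearest center (lower is better).
    centers = decode_particle(particle, k, pixels.shape[1])
    dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    return dists.min(axis=1).sum(), labels

# Tiny example: four gray-level "pixels" and a particle encoding two 1-D centers.
pixels = np.array([[0.1], [0.2], [0.8], [0.9]])
particle = [0.15, 0.85]
score, labels = clustering_fitness(particle, pixels, k=2)
```

An optimizer such as EFO would perturb the flat vector directly, which is why a fixed-length real encoding is convenient here.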

Author(s):  
Seyed Jalaleddin Mousavirad ◽  
Gerald Schaefer ◽  
Mahshid Helali Moghadam ◽  
Mehrdad Saadatmand ◽  
Mahdi Pedram


2019 ◽  
Vol 10 (2) ◽  
pp. 55-92 ◽  
Author(s):  
Victer Paul ◽  
Ganeshkumar C ◽  
Jayakumar L

Genetic algorithms (GAs) are population-based metaheuristic global optimization techniques for complex problems with very large search spaces. Population initialization is a crucial task in GAs because it strongly influences convergence speed, exploration of the problem search space, and the quality of the final solution. Although the importance of problem-specific population initialization in GAs is widely recognized, it is hardly addressed in the literature. In this article, different population seeding techniques for permutation-coded genetic algorithms, such as random, nearest neighbor (NN), gene bank (GB), sorted population (SP), and selective initialization (SI), along with three newly proposed ordered-distance-vector-based initialization techniques, are extensively studied. Each population seeding technique is examined against a set of performance criteria: computation time, convergence rate, error rate, average convergence, convergence diversity, nearest-neighbor ratio, average distinct solutions, and distribution of individuals. The traveling salesman problem (TSP), a well-known hard combinatorial problem, is chosen as the testbed, and the experiments are performed on large benchmark TSP instances from the standard TSPLIB. The scope of the experiments is limited to the initialization phase of the GA, which helps assess the performance of the population seeding techniques in their intended phase alone. The analyses are carried out using statistical tools to establish the distinct performance characteristics of each population seeding technique, and the best-performing techniques are identified based on the defined assessment criteria and the nature of the application.
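
As an illustration of one studied technique, a minimal nearest-neighbor (NN) seeder for permutation-coded TSP tours grows each tour by always visiting the closest unvisited city next; seeding from different start cities yields a diverse initial population. Function names are hypothetical and the paper's implementation details may differ:

```python
def nn_seed_tour(dist, start):
    # Nearest-neighbor seeding: always visit the closest unvisited city next.
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[last][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def seed_population(dist, pop_size):
    # One NN tour per distinct start city gives a diverse initial population.
    return [nn_seed_tour(dist, s % len(dist)) for s in range(pop_size)]

# Four cities on a line at positions 0, 1, 2, 10.
pos = [0, 1, 2, 10]
dist = [[abs(a - b) for b in pos] for a in pos]
pop = seed_population(dist, 4)
```

Each individual is a permutation of city indices, which is exactly the representation a permutation-coded GA expects.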


2019 ◽  
Vol 2019 ◽  
pp. 1-20 ◽  
Author(s):  
Alkin Yurtkuran

Electromagnetic field optimization (EFO) is a relatively new physics-inspired, population-based metaheuristic algorithm that simulates the behavior of electromagnets with different polarities and takes advantage of a nature-inspired ratio known as the golden ratio. In EFO, the population consists of electromagnetic particles, made of electromagnets corresponding to the variables of an optimization problem, and is divided into three fields: positive, negative, and neutral. In each iteration, a new electromagnetic particle is generated based on the attraction-repulsion forces among these fields, where the repulsion force helps particles avoid local optima and the attraction force leads them toward the global optimum. This paper introduces an improved version of EFO called improved electromagnetic field optimization (iEFO). Distinct from EFO, iEFO makes two novel modifications: a new solution generation function for the electromagnets and adaptive control of the algorithmic parameters. In addition to these major improvements, the boundary control and randomization procedures for newly generated electromagnets are modified. In the computational studies, the performance of the proposed iEFO is tested against the original EFO, existing physics-inspired algorithms, and state-of-the-art metaheuristics such as the artificial bee colony algorithm, particle swarm optimization, and differential evolution. The results are verified with statistical testing and reveal that iEFO outperforms EFO and the other competitor algorithms.
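
The particle-generation step described above can be sketched roughly as follows. This is a simplified illustration, not the exact update rule from either paper: each electromagnet of the child is attracted toward the positive field and repelled from the negative field, with attraction weighted by the golden ratio, followed by the boundary-control and randomization steps the abstract mentions:

```python
import random

GOLDEN = (1 + 5 ** 0.5) / 2  # nature-inspired golden ratio, ~1.618

def new_particle(pos, neu, neg, lo, hi, p_rand=0.1, rng=random):
    # Simplified EFO-style generation: start from a neutral-field particle,
    # pull toward the positive field (golden-ratio weighted attraction)
    # and push away from the negative field (repulsion).
    child = []
    for p, u, n in zip(pos, neu, neg):
        r = rng.random()
        x = u + GOLDEN * r * (p - u) - r * (n - u)
        # Boundary control: clamp back into the feasible range.
        x = max(lo, min(hi, x))
        # Randomization: occasionally reset an electromagnet to keep diversity.
        if rng.random() < p_rand:
            x = lo + rng.random() * (hi - lo)
        child.append(x)
    return child

child = new_particle([0.9, 0.8], [0.5, 0.5], [0.1, 0.2], lo=0.0, hi=1.0)
```

Because attraction is weighted by the golden ratio while repulsion is not, exploitation slightly dominates exploration, which is the balance both abstracts attribute to EFO.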


A novel optimal multi-level thresholding method for grayscale images is proposed using Fractional-order Darwinian Particle Swarm Optimization (FDPSO) and the Tsallis function. Maximization of the Tsallis entropy is chosen as the objective function (OF), which guides FDPSO's exploration until the search converges to an optimal solution. The proposed method is tested on six standard test images and compared with heuristic methods such as the Bat Algorithm (BA) and the Firefly Algorithm (FA). The robustness of the proposed thresholding procedure was validated on the considered image data set corrupted with Poisson Noise (PN) and Gaussian Noise (GN). The results verify that FDPSO offers better image quality measures than the BA and FA algorithms. Wilcoxon's test performed on the Mean Structural Similarity Index (MSSIM) confirms the statistical significance of FDPSO with respect to BA and FA and shows that segmentation remains clear even on the noisy data set.
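
The Tsallis objective that the optimizer maximizes can be sketched for the bilevel (single-threshold) case as below. The histogram, the entropic index q, and the exhaustive search are illustrative; the paper optimizes multiple thresholds with FDPSO instead of brute force:

```python
import numpy as np

def tsallis_entropy(p, q):
    # Tsallis entropy S_q = (1 - sum(p_i^q)) / (q - 1) of a distribution p.
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def tsallis_objective(hist, t, q=0.8):
    # Split the normalized histogram at t, renormalize each class, and
    # combine class entropies with the pseudo-additivity term (1-q)*Sa*Sb.
    p = hist / hist.sum()
    pa, pb = p[:t], p[t:]
    wa, wb = pa.sum(), pb.sum()
    if wa == 0 or wb == 0:
        return -np.inf
    sa = tsallis_entropy(pa / wa, q)
    sb = tsallis_entropy(pb / wb, q)
    return sa + sb + (1.0 - q) * sa * sb

# Bimodal toy histogram over 8 gray levels; pick the best threshold by search.
hist = np.array([10, 30, 10, 1, 1, 10, 30, 10], dtype=float)
best_t = max(range(1, 8), key=lambda t: tsallis_objective(hist, t))
```

FDPSO's role is simply to search the threshold space for the maximizer of this objective faster than exhaustive enumeration when several thresholds are needed.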


Author(s):  
K. RAJU ◽  
DR.M.NARSING YADAV ◽  
M. MARIYADAS

Humans have sense organs to perceive the outside world, and among these the eyes are vital. The eyes capture light from the outside world, and the brain stores the information as images, analyzes the image data, and extracts the required information about the surroundings. Images are among the most prominent and convenient ways of representing data; the art of representing information through images is as old as civilized man, and images often convey data more clearly than words or other representations. Image segmentation is a long-standing research topic that has gained importance over the past four decades. Several segmentation methods exist, but none is universally optimal, because there is no specific benchmark for judging segmentation quality. In this project we propose a segmentation method called normalized graph cut segmentation. It is a global-view approach that models the image as a graph and performs segmentation using a similarity measurement technique, which overcomes the problems of over-segmentation and the effect of noise. The method is tested on various cases, such as landscape images, texture-based images, and high-density feature-based images, and the performance of the algorithm is tabulated.
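
The graph-model view can be sketched as follows: pixels become nodes, a similarity measure defines edge weights, and the relaxed normalized cut is read off an eigenvector of the normalized Laplacian. This is a minimal bipartition sketch, not the full recursive algorithm:

```python
import numpy as np

def affinity_matrix(features, sigma=1.0):
    # Similarity measurement: W[i, j] = exp(-||f_i - f_j||^2 / sigma^2),
    # treating every pixel as a graph node.
    f = np.asarray(features, dtype=float)
    d2 = np.sum((f[:, None, :] - f[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / sigma ** 2)

def ncut_bipartition(W):
    # Relaxed normalized cut: the eigenvector for the second-smallest
    # eigenvalue of the normalized Laplacian I - D^(-1/2) W D^(-1/2)
    # splits the graph; thresholding it at 0 gives two segments.
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    L = np.eye(len(W)) - (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]  # eigenvector of the second-smallest eigenvalue
    return (fiedler > 0).astype(int)

# Two well-separated intensity groups should land in different segments.
features = [[0.0], [0.1], [5.0], [5.1]]
labels = ncut_bipartition(affinity_matrix(features, sigma=1.0))
```

Because the cut is normalized by segment association, tiny isolated regions are penalized, which is why the approach resists over-segmentation and noise.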


2021 ◽  
Vol 9 (2) ◽  
pp. 157
Author(s):  
Xi Yu ◽  
Bing Ouyang ◽  
Jose C. Principe

Deep neural networks deliver remarkable performance on supervised learning tasks with extensive collections of labeled data. However, creating such large, well-annotated data sets requires considerable resources, time, and effort, especially for underwater image data sets such as corals and marine animals. The overreliance on labels is therefore one of the main obstacles to widespread application of deep learning methods. To overcome this need for large annotated data sets, this paper proposes a label-efficient deep learning framework for image segmentation that uses only very sparse point-supervision. Our approach employs latent Dirichlet allocation (LDA) with spatial coherence on the feature space to iteratively generate pseudo labels. The method requires, as an initial condition, a Wide Residual Network (WRN) trained with sparse labels and mutual information constraints. The proposed method is evaluated on a sparsely labeled coral image data set collected from the Pulley Ridge region in the Gulf of Mexico. Experiments show that our method improves image segmentation performance with sparsely labeled samples and achieves better results than other semi-supervised approaches.
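
The point-supervision idea can be illustrated with a much-simplified stand-in for the paper's LDA-based pseudo-labeling: give every unlabeled pixel the label of its nearest labeled pixel in feature space, turning a handful of annotated points into a dense training mask. All names and the class semantics here are illustrative:

```python
import numpy as np

def propagate_point_labels(features, point_idx, point_labels):
    # Much-simplified stand-in for iterative pseudo-label generation:
    # each pixel takes the label of the nearest labeled pixel in feature space.
    f = np.asarray(features, dtype=float)
    anchors = f[point_idx]
    d = np.linalg.norm(f[:, None, :] - anchors[None, :, :], axis=2)
    return np.asarray(point_labels)[d.argmin(axis=1)]

# Six pixel features; only pixels 0 and 5 carry point labels (0 = sand, 1 = coral).
features = [[0.0], [0.2], [0.1], [0.9], [1.1], [1.0]]
pseudo = propagate_point_labels(features, point_idx=[0, 5], point_labels=[0, 1])
```

The actual framework replaces this nearest-neighbor rule with LDA topics plus a spatial-coherence constraint, and iterates between pseudo-labeling and retraining the WRN.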


Author(s):  
M. Jeyanthi ◽  
C. Velayutham

Brain-computer interfaces (BCIs) play a vital role in science and technology research. Classification is a data mining technique used to predict group membership for data instances, and analyzing BCI data is challenging because feature extraction and classification are more difficult for these data than for raw data. In this paper, we extract statistical Haralick features from the raw EEG data. The features are then normalized, and binning is used to improve the accuracy of the predictive models by reducing noise and eliminating irrelevant attributes. Classification is then performed on the BCI data set using techniques such as Naïve Bayes, the k-nearest neighbor classifier, and the SVM classifier. Finally, we propose the SVM classification algorithm for the BCI data set.
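
Haralick features are derived from a gray-level co-occurrence matrix (GLCM). A minimal sketch for one horizontal pixel offset follows; the exact feature set, offsets, and quantization used in the paper are not specified here:

```python
import numpy as np

def glcm(img, levels):
    # Gray-level co-occurrence matrix for horizontally adjacent values,
    # the basis of Haralick texture features.
    m = np.zeros((levels, levels))
    for row in img:
        for a, b in zip(row[:-1], row[1:]):
            m[a, b] += 1
    return m / m.sum()

def haralick_features(img, levels):
    p = glcm(img, levels)
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)              # Haralick contrast
    energy = np.sum(p ** 2)                          # angular second moment
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))  # inverse difference
    return contrast, energy, homogeneity

# A perfectly uniform window has zero contrast and maximal energy/homogeneity.
flat = [[1, 1, 1], [1, 1, 1]]
contrast, energy, homogeneity = haralick_features(flat, levels=2)
```

Such scalar texture statistics form the feature vectors that are then normalized, binned, and passed to the classifiers.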


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Various whiteboard image degradations greatly reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, researchers have addressed the problem through different image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To surmount these problems, the authors propose a deep-learning-based solution: they contribute a new whiteboard image data set and adapt two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrate superior performance over the conventional methods.
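
For context, the conventional white-balancing baseline mentioned above can be sketched as a per-channel rescaling that maps each channel's bright values to full white. This is a minimal illustrative sketch, not the authors' pipeline:

```python
import numpy as np

def white_balance(img, percentile=95):
    # Conventional baseline: scale each channel so its bright (percentile)
    # value maps to 255, removing a global color cast from the board.
    img = np.asarray(img, dtype=float)
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        scale = 255.0 / max(np.percentile(img[..., c], percentile), 1.0)
        out[..., c] = np.clip(img[..., c] * scale, 0, 255)
    return out

# A grayish 2x2 "board" with a blue cast gets pulled toward white.
board = np.full((2, 2, 3), [180, 180, 220])
balanced = white_balance(board)
```

Global rescalings like this cannot recover strokes that were lost to the degradation itself, which is the gap the learned models are meant to close.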

