Evolving Deep DenseBlock Architecture Ensembles for Image Classification

Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1880
Author(s):  
Ben Fielding ◽  
Li Zhang

Automatic deep architecture generation is a challenging task, owing to the large number of controlling parameters inherent in the construction of deep networks. The combination of these parameters leads to large, complex search spaces that are practically impossible to navigate properly without enormous resources for parallelisation. To deal with such challenges, in this research we propose a Swarm Optimised DenseBlock Architecture Ensemble (SODBAE) method, a joint optimisation and training process that explores a constrained search space over a skeleton DenseBlock Convolutional Neural Network (CNN) architecture. Specifically, we employ novel weight inheritance learning mechanisms, a DenseBlock skeleton architecture, and adaptive Particle Swarm Optimisation (PSO) with cosine search coefficients to devise networks whilst maintaining practical computational costs. Moreover, the architecture design takes advantage of recent advances in residual connections and dense connectivity in order to yield CNN models with a much wider variety of structural variations. The proposed weight inheritance learning schemes perform joint optimisation and training of the architectures to reduce the computational costs. Evaluated on the CIFAR-10 dataset, the proposed model shows clear superiority in classification performance over other state-of-the-art methods while demonstrating greater versatility in architecture generation.
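As a rough illustration of the adaptive PSO component described above, the sketch below (Python/NumPy) runs a particle swarm whose cognitive and social coefficients follow a cosine schedule over the iterations. The particle encoding (growth rate, layers per block, dropout), the exact coefficient schedule and the placeholder fitness function are illustrative assumptions, not the authors' SODBAE implementation, which would train and evaluate candidate DenseBlock CNNs.

import numpy as np

rng = np.random.default_rng(0)

# Search-space bounds: [growth_rate, layers_per_block, dropout_rate] (assumed encoding)
lo = np.array([8.0, 2.0, 0.0])
hi = np.array([48.0, 12.0, 0.5])

def fitness(x):
    # Placeholder objective standing in for the validation error of a candidate
    # DenseBlock network; a real run would train and evaluate a CNN here.
    g, l, d = x
    return (g - 32) ** 2 / 100 + (l - 6) ** 2 + 10 * (d - 0.2) ** 2

n_particles, n_iters, w = 20, 50, 0.7
pos = rng.uniform(lo, hi, size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for t in range(n_iters):
    # Cosine schedule: cognitive coefficient decays, social coefficient grows.
    c1 = 2.0 * np.cos(np.pi * t / (2 * n_iters))        # 2.0 -> ~0.0
    c2 = 2.0 * (1 - np.cos(np.pi * t / (2 * n_iters)))  # ~0.0 -> 2.0
    r1, r2 = rng.random((2, n_particles, 3))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best candidate (growth, layers, dropout):", np.round(gbest, 3))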

2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Antonino Laudani ◽  
Francesco Riganti Fulginei ◽  
Alessandro Salvini ◽  
Gabriele Maria Lozito ◽  
Salvatore Coco

In recent years, several numerical methods have been proposed to identify the five-parameter model of photovoltaic panels from manufacturer datasheets, often by introducing simplification or approximation techniques. In this paper we present a fast and accurate procedure for obtaining the parameters of the five-parameter model by starting from its reduced form. The procedure allows characterizing, in a few seconds, thousands of photovoltaic panels present in standard databases. It introduces and takes advantage of further important mathematical considerations without any model simplifications or data approximations. In particular, the five parameters are divided into two groups, independent and dependent parameters, in order to reduce the dimensions of the search space. This partitioning of the parameters provides a strong advantage in terms of convergence, computational cost, and execution time of the present approach. Validations on thousands of photovoltaic panels show that the extraction of the five parameters can be made simple and efficient, without the need to select a specific solver algorithm: any deterministic optimization/minimization technique can be used.
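The reduced-form idea lends itself to a compact sketch: for a candidate pair of independent parameters, the remaining three parameters follow from a small linear system built from the datasheet conditions, and a scalar residual is then minimized over the two-dimensional space. The particular partitioning shown here (ideality factor and series resistance as the independent pair, with the maximum-power-point derivative condition as the residual) and the datasheet numbers are assumptions for illustration, not the paper's exact formulation.

import numpy as np
from scipy.optimize import minimize

# Illustrative STC datasheet values (assumed) for a 54-cell module.
Isc, Voc, Imp, Vmp, Ns = 8.21, 32.9, 7.61, 26.3, 54
Vt = 0.02569  # thermal voltage at 25 C [V]

def dependent_params(n, Rs):
    """Solve the 3x3 linear system for (Iph, I0, G=1/Rsh) at SC, OC and MPP."""
    a = n * Ns * Vt
    rows, rhs = [], []
    for V, I in ((0.0, Isc), (Voc, 0.0), (Vmp, Imp)):
        rows.append([1.0, -(np.exp((V + I * Rs) / a) - 1.0), -(V + I * Rs)])
        rhs.append(I)
    Iph, I0, G = np.linalg.solve(np.array(rows), np.array(rhs))
    return Iph, I0, G

def residual(x):
    """Violation of dP/dV = 0 at the datasheet maximum-power point."""
    n, Rs = x
    if n <= 0 or Rs < 0:
        return 1e6
    a = n * Ns * Vt
    Iph, I0, G = dependent_params(n, Rs)
    if I0 <= 0 or G < 0:
        return 1e6
    k = I0 / a * np.exp((Vmp + Imp * Rs) / a) + G
    dIdV = -k / (1.0 + Rs * k)
    return (dIdV + Imp / Vmp) ** 2

res = minimize(residual, x0=[1.2, 0.3], method="Nelder-Mead")
n, Rs = res.x
Iph, I0, G = dependent_params(n, Rs)
print(f"n={n:.3f}  Rs={Rs:.3f}  Iph={Iph:.3f}  I0={I0:.3e}  Rsh={1/G:.1f}")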


2017 ◽  
Vol 13 (1) ◽  
pp. 155014771668368 ◽  
Author(s):  
Charissa Ann Ronao ◽  
Sung-Bae Cho

Human activity recognition has been gaining more and more attention from researchers in recent years, particularly with the use of widespread and commercially available devices such as smartphones. However, most existing works focus on discriminative classifiers while neglecting the inherent time-series and continuous characteristics of sensor data. To address this, we propose a two-stage continuous hidden Markov model framework, which also takes advantage of the innate hierarchical structure of basic activities. This system architecture not only enables the use of different feature subsets on different subclasses, which effectively reduces feature computation overhead, but also allows for varying numbers of states and iterations. Experiments show that the hierarchical structure dramatically increases classification performance. We analyze the behavior of the accelerometer and gyroscope signals for each activity through graphs, and with fine-tuning of states and training iterations, the proposed method achieves an overall accuracy of up to 93.18%, which is the best performance among state-of-the-art classifiers for the problem at hand.
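A minimal sketch of such a two-stage HMM classifier is given below using the hmmlearn package: stage one separates coarse groups (static vs. dynamic activities), stage two picks the specific activity within the winning group, and each stage fits one Gaussian HMM per class and classifies by maximum log-likelihood. The grouping, state counts and synthetic accelerometer windows are assumptions; the paper's feature subsets and tuned models are not reproduced.

import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

def train_per_class_hmms(sequences, n_states):
    """Fit one GaussianHMM per class label on its training sequences."""
    models = {}
    for label, seqs in sequences.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, x):
    """Return the label whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda lbl: models[lbl].score(x))

# Synthetic stand-in data: 3-axis accelerometer windows per activity.
def fake_windows(mean, n=20, length=64):
    return [np.array(mean) + 0.3 * rng.standard_normal((length, 3)) for _ in range(n)]

train = {"walking": fake_windows([0.8, 0.1, 0.0]),
         "jogging": fake_windows([1.6, 0.2, 0.1]),
         "sitting": fake_windows([0.0, 0.0, 1.0]),
         "standing": fake_windows([0.1, 0.0, 0.9])}

# Stage 1: coarse dynamic/static models pooled over their member activities.
stage1 = train_per_class_hmms(
    {"dynamic": train["walking"] + train["jogging"],
     "static": train["sitting"] + train["standing"]}, n_states=4)
# Stage 2: finer models, potentially with different state counts per group.
stage2 = {"dynamic": train_per_class_hmms({k: train[k] for k in ("walking", "jogging")}, 6),
          "static": train_per_class_hmms({k: train[k] for k in ("sitting", "standing")}, 3)}

test = fake_windows([1.55, 0.2, 0.1], n=1)[0]          # looks like jogging
print(classify(stage2[classify(stage1, test)], test))   # -> "jogging" (likely)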


Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3011
Author(s):  
Drishti Yadav

This paper introduces a novel population-based bio-inspired meta-heuristic optimization algorithm, called the Blood Coagulation Algorithm (BCA). BCA derives inspiration from the process of blood coagulation in the human body. The underlying concepts behind the proposed algorithm are the cooperative behavior of thrombocytes and their intelligent strategy of clot formation. These behaviors are modeled and utilized to balance intensification and diversification in a given search space. A comparison with various state-of-the-art meta-heuristic algorithms over a test suite of 23 well-known benchmark functions demonstrates the efficiency of BCA. An extensive investigation is conducted to analyze the performance, convergence behavior and computational complexity of BCA. The comparative study and statistical test analysis demonstrate that BCA offers very competitive and statistically significant results compared to other eminent meta-heuristic algorithms. Experimental results also show the consistent performance of BCA in high-dimensional search spaces. Furthermore, we demonstrate the applicability of BCA to real-world applications by solving several real-life engineering problems.
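The abstract does not give BCA's update equations, so the following is only a generic population-based skeleton, labelled as hypothetical and not BCA itself, that illustrates the intensification/diversification split such meta-heuristics rely on: each candidate either moves toward the best solution found so far or takes a shrinking random step, here on the sphere benchmark.

import numpy as np

rng = np.random.default_rng(1)
dim, pop_size, iters = 10, 30, 200
sphere = lambda x: np.sum(x ** 2, axis=-1)   # benchmark objective

pop = rng.uniform(-5, 5, size=(pop_size, dim))
best = pop[sphere(pop).argmin()].copy()

for t in range(iters):
    step = 1.0 - t / iters                        # shrink exploration step over time
    intensify = rng.random(pop_size) < 0.7        # 70% exploit, 30% explore (assumed split)
    toward_best = pop + rng.random((pop_size, 1)) * (best - pop)
    random_walk = pop + step * rng.standard_normal((pop_size, dim))
    pop = np.where(intensify[:, None], toward_best, random_walk)
    pop = np.clip(pop, -5, 5)
    cand = pop[sphere(pop).argmin()]
    if sphere(cand) < sphere(best):
        best = cand.copy()

print("best value:", sphere(best))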


2019 ◽  
Vol 3 (2) ◽  
pp. 11-18
Author(s):  
George Mweshi

Extracting useful and novel information from the large amount of collected data has become a necessity for corporations wishing to maintain a competitive advantage. One of the biggest issues in handling these significantly large datasets is the curse of dimensionality: as the dimension of the data increases, the performance of the data mining algorithms employed to mine the data deteriorates. This deterioration is mainly caused by the large search space created as a result of having irrelevant, noisy and redundant features in the data. Feature selection is one of the techniques that can be used to remove these unnecessary features; it consequently reduces the dimension of the data as well as the search space, which in turn increases the efficiency and accuracy of the mining algorithms. In this paper, we investigate the ability of Genetic Programming (GP), an evolutionary search strategy capable of automatically finding solutions in complex and large search spaces, to perform feature selection. We implement a basic GP algorithm and perform feature selection on 5 benchmark classification datasets from the UCI repository. To test the competitiveness and feasibility of the GP approach, we examine the classification performance of four classifiers, namely J48, Naive Bayes, PART and Random Forests, using the GP-selected features, all the original features, and the features selected by other commonly used feature selection techniques, i.e., principal component analysis, information gain, Relief-F and CFS. The experimental results show that not only does GP select a smaller set of features from the original features, but classifiers using the GP-selected features also achieve better classification performance than when using all the original features. Furthermore, compared to the other well-known feature selection techniques, GP achieves very competitive results.
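A wrapper-style fitness evaluation of the kind such a feature-selection GP needs can be sketched briefly: a candidate feature subset is scored by the cross-validated accuracy of a classifier trained only on those features, with a small penalty per selected feature so smaller subsets win ties. The GP tree evolution itself is omitted, a scikit-learn decision tree stands in for J48, and the dataset is an sklearn example rather than the paper's UCI sets.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

def subset_fitness(mask, X, y):
    """Accuracy of a classifier using only the selected features, minus a
    small penalty per feature so smaller subsets are preferred on ties."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(DecisionTreeClassifier(random_state=0),
                          X[:, mask], y, cv=5).mean()
    return acc - 0.001 * mask.sum()

rng = np.random.default_rng(0)
all_features = np.ones(X.shape[1], dtype=bool)
random_subset = rng.random(X.shape[1]) < 0.3   # a candidate an evolved GP tree might encode
print("all features :", round(subset_fitness(all_features, X, y), 4))
print("random subset:", round(subset_fitness(random_subset, X, y), 4))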


Author(s):  
Kalev Kask ◽  
Bobak Pezeshki ◽  
Filjor Broka ◽  
Alexander Ihler ◽  
Rina Dechter

Abstraction Sampling (AS) is a recently introduced enhancement of Importance Sampling that exploits stratification by using a notion of abstractions: groupings of similar nodes into abstract states. It was previously shown that AS performs particularly well when sampling over an AND/OR search space; however, existing schemes were limited to "proper" abstractions in order to ensure unbiasedness, severely hindering scalability. In this paper, we introduce AOAS, a new Abstraction Sampling scheme on AND/OR search spaces that allows more flexible use of abstractions by circumventing the properness requirement. We analyze the properties of this new algorithm and, in an extensive empirical evaluation on five benchmarks, over 480 problems, and comparing against other state-of-the-art algorithms, illustrate AOAS's properties and show that it provides a far more powerful and competitive Abstraction Sampling framework.
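As a heavily simplified illustration of abstraction-as-stratification (not the AOAS algorithm and not an AND/OR search space), the sketch below estimates a weighted sum over all binary strings of length n by importance sampling, comparing a flat uniform proposal with a version that groups strings into strata by their number of ones and samples a few members per stratum. The toy weight function and strata are assumptions made purely for illustration.

import numpy as np
from math import comb

rng = np.random.default_rng(0)
n = 16
costs = 0.05 * np.arange(n)
weight = lambda bits: np.exp(-np.dot(costs, bits))   # toy weight function
exact = np.prod(1.0 + np.exp(-costs))                 # closed-form reference value

def flat_estimate(samples=68):
    # Plain importance sampling with a uniform proposal over the 2^n strings.
    bits = rng.integers(0, 2, size=(samples, n))
    return np.mean([weight(b) for b in bits]) * 2 ** n

def stratified_estimate(samples_per_stratum=4):
    # "Abstract" each string by its number of ones; sample a few members per
    # stratum and scale by the stratum size (a stand-in for proper abstractions).
    total = 0.0
    for ones in range(n + 1):
        ws = []
        for _ in range(samples_per_stratum):
            b = np.zeros(n)
            b[rng.permutation(n)[:ones]] = 1
            ws.append(weight(b))
        total += comb(n, ones) * np.mean(ws)
    return total

print("exact               :", round(exact, 2))
print("flat importance est.:", round(flat_estimate(), 2))
print("stratified estimate :", round(stratified_estimate(), 2))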


Author(s):  
Xiaotong Lu ◽  
Han Huang ◽  
Weisheng Dong ◽  
Xin Li ◽  
Guangming Shi

Network pruning has been proposed as a remedy for alleviating the over-parameterization problem of deep neural networks. However, its value has recently been challenged, especially from the perspective of neural architecture search (NAS). We challenge the conventional wisdom of pruning-after-training by proposing a joint search-and-training approach that directly learns a compact network from scratch. By treating pruning as a search strategy, we present two new insights in this paper: 1) it is possible to expand the search space of network pruning by associating each filter with a learnable weight; 2) joint search-and-training can be conducted iteratively to maximize learning efficiency. More specifically, we propose a coarse-to-fine tuning strategy to iteratively sample and update compact sub-networks to approximate the target network. The weights associated with network filters are updated accordingly by joint search-and-training to reflect learned knowledge in the NAS space. Moreover, we introduce strategies of random perturbation (inspired by Monte Carlo methods) and flexible thresholding (inspired by reinforcement learning) to adjust the weight and size of each layer. Extensive experiments on ResNet and VGGNet demonstrate the superior performance of our proposed method on popular datasets including CIFAR10, CIFAR100 and ImageNet.
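A minimal sketch of the "learnable weight per filter" idea is given below in PyTorch: each convolutional filter gets a scalar gate trained jointly with the network, an L1 penalty pushes unimportant gates toward zero, and filters whose gates fall below a threshold are marked prunable. The gating design, penalty weight and toy architecture are assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConv(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)
        self.gate = nn.Parameter(torch.ones(c_out))   # one learnable weight per filter

    def forward(self, x):
        # Scale each output channel by its gate; gates near zero mark prunable filters.
        return F.relu(self.conv(x)) * self.gate.view(1, -1, 1, 1)

    def prune_mask(self, threshold=0.05):
        return self.gate.detach().abs() > threshold

net = nn.Sequential(GatedConv(3, 16), GatedConv(16, 32),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10))
opt = torch.optim.SGD(net.parameters(), lr=0.05)

x = torch.randn(8, 3, 32, 32)                 # stand-in batch
y = torch.randint(0, 10, (8,))
for _ in range(5):                            # a few joint "search-and-train" steps
    loss = F.cross_entropy(net(x), y)
    # L1 penalty on gates pushes unimportant filters toward zero (prunable).
    loss = loss + 1e-2 * sum(m.gate.abs().sum() for m in net if isinstance(m, GatedConv))
    opt.zero_grad()
    loss.backward()
    opt.step()

for i, m in enumerate(net):
    if isinstance(m, GatedConv):
        kept = int(m.prune_mask().sum())
        print(f"layer {i}: keeping {kept}/{m.gate.numel()} filters")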


2011 ◽  
Vol 328-330 ◽  
pp. 1881-1886
Author(s):  
Cen Zeng ◽  
Qiang Zhang ◽  
Xiao Peng Wei

The genetic algorithm (GA), a global, probabilistic optimization method with high performance, has attracted broad attention from researchers worldwide and has yielded plentiful achievements. This paper presents an algorithm that develops path planning within a given search space using a GA, achieving full-area coverage while avoiding obstacles automatically. Specific genetic operators (such as selection, crossover and mutation) are introduced, and the handling of exceptional situations is described in detail. After that, an active genetic algorithm is introduced which overcomes the drawbacks of earlier full-area coverage path-planning algorithms. A comparison between several well-known algorithms and the genetic algorithm is demonstrated in this paper: our path-planning genetic algorithm yields the best performance in terms of flexibility and coverage, and it meets the needs of polygonal obstacles. For full-area coverage path planning, a genotype is proposed that is able to address more complicated search spaces.
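A toy version of GA-based coverage path planning can be sketched as follows: the genotype is a visiting order over all free cells of a small grid with obstacles, fitness is the total Manhattan path length, and the loop uses tournament selection, order crossover and swap mutation. The encoding, operators and grid are illustrative assumptions; they are not the paper's specific operators or exception handling.

import numpy as np

rng = np.random.default_rng(0)
W, H = 6, 6
obstacles = {(2, 2), (2, 3), (3, 2)}                       # illustrative blocked cells
cells = [(x, y) for x in range(W) for y in range(H) if (x, y) not in obstacles]

def path_length(order):
    pts = [cells[i] for i in order]
    return sum(abs(a[0] - b[0]) + abs(a[1] - b[1]) for a, b in zip(pts, pts[1:]))

def order_crossover(p1, p2):
    # Keep a slice of parent 1, fill the remaining slots in parent 2's order.
    a, b = sorted(rng.choice(len(p1), size=2, replace=False))
    child = [-1] * len(p1)
    child[a:b] = p1[a:b]
    fill = iter(g for g in p2 if g not in set(p1[a:b]))
    return [next(fill) if g == -1 else g for g in child]

def mutate(p, rate=0.2):
    p = list(p)
    if rng.random() < rate:
        i, j = rng.choice(len(p), size=2, replace=False)
        p[i], p[j] = p[j], p[i]
    return p

pop = [list(rng.permutation(len(cells))) for _ in range(60)]
for gen in range(200):
    fits = np.array([path_length(p) for p in pop])
    def pick():                                     # tournament selection (size 3)
        idx = rng.choice(len(pop), size=3, replace=False)
        return pop[idx[np.argmin(fits[idx])]]
    elite = pop[int(fits.argmin())]                 # keep the best individual
    pop = [elite] + [mutate(order_crossover(pick(), pick())) for _ in range(len(pop) - 1)]

print("best coverage path length:", path_length(pop[0]))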


2016 ◽  
pp. 762-793
Author(s):  
Fatai Anifowose ◽  
Jane Labadin ◽  
Abdulazeez Abdulraheem

Artificial Neural Networks (ANN) have been widely applied in petroleum reservoir characterization. Despite their wide use, they are very unstable in terms of performance. Ensemble machine learning is capable of improving the performance of such unstable techniques. One of the challenges of using ANN is choosing the appropriate number of hidden neurons. Previous studies have proposed ANN ensemble models with a maximum of 50 hidden neurons in the search space, thereby leaving room for further improvement. This chapter presents extended versions of those studies with increased search spaces, using a linear search and randomized assignment of the number of hidden neurons. Using standard model evaluation criteria and novel ensemble combination rules, the results of this study suggest that having a large number of “unbiased” randomized guesses of the number of hidden neurons beyond 50 performs better than having only a few optimally determined values.
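The randomized-hidden-neuron idea can be sketched with scikit-learn: several MLPs are trained whose hidden layer sizes are random guesses over a wide range (well beyond 50), and their predictions are combined. Simple averaging stands in for the chapter's ensemble combination rules, and the data is synthetic rather than reservoir data; all names below are illustrative.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

X, y = make_regression(n_samples=600, n_features=12, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
members = []
for size in rng.integers(10, 300, size=10):           # random guesses of hidden neuron counts
    m = MLPRegressor(hidden_layer_sizes=(int(size),), max_iter=2000, random_state=0)
    members.append(m.fit(X_tr, y_tr))

preds = np.mean([m.predict(X_te) for m in members], axis=0)   # simple averaging rule
print("ensemble R^2:", round(r2_score(y_te, preds), 3))
print("first three single-model R^2:",
      [round(r2_score(y_te, m.predict(X_te)), 3) for m in members[:3]])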


2015 ◽  
Vol 23 (1) ◽  
pp. 101-129 ◽  
Author(s):  
Antonios Liapis ◽  
Georgios N. Yannakakis ◽  
Julian Togelius

Novelty search is a recent algorithm geared toward exploring search spaces without regard to objectives. When the presence of constraints divides a search space into feasible space and infeasible space, interesting implications arise regarding how novelty search explores such spaces. This paper elaborates on the problem of constrained novelty search and proposes two novelty search algorithms which search within both the feasible and the infeasible space. Inspired by the FI-2pop genetic algorithm, both algorithms maintain and evolve two separate populations, one with feasible and one with infeasible individuals, while each population can use its own selection method. The proposed algorithms are applied to the problem of generating diverse but playable game levels, which is representative of the larger problem of procedural game content generation. Results show that the two-population constrained novelty search methods can create, under certain conditions, larger and more diverse sets of feasible game levels than current methods of novelty search, whether constrained or unconstrained. However, the best algorithm is contingent on the particularities of the search space and the genetic operators used. Additionally, the proposed enhancement of offspring boosting is shown to enhance performance in all cases of two-population novelty search.
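A toy two-population loop in the spirit of FI-2pop constrained novelty search is sketched below: feasible individuals are selected for novelty (mean distance to their nearest neighbours in behaviour space, plus a small archive), infeasible individuals for low constraint violation, and offspring are re-sorted into the two populations each generation. The genome, behaviour descriptor and constraint are placeholders, not the paper's level generator, and the offspring-boosting enhancement is not included.

import numpy as np

rng = np.random.default_rng(0)

def behaviour(x):            # behaviour descriptor; here simply the genome itself
    return x

def violation(x):            # toy constraint: the genome must lie inside the unit disc
    return max(0.0, float(np.linalg.norm(x)) - 1.0)

def novelty(b, others, k=5):
    d = np.sort(np.linalg.norm(others - b, axis=1))
    return d[1:k + 1].mean()     # skip the zero distance to itself

pop = rng.uniform(-2, 2, size=(40, 2))
archive = []

for gen in range(50):
    feas = pop[[violation(x) == 0 for x in pop]]
    infeas = pop[[violation(x) > 0 for x in pop]]

    children = []
    if len(feas) > 1:            # feasible parents chosen by novelty
        ref = np.vstack(list(feas) + archive) if archive else feas
        scores = np.array([novelty(behaviour(x), ref) for x in feas])
        parents = feas[np.argsort(-scores)[:10]]
        children += list(parents[rng.integers(0, len(parents), 20)] +
                         0.1 * rng.standard_normal((20, 2)))
        archive.append(behaviour(parents[0]))
    if len(infeas) > 1:          # infeasible parents chosen by low violation
        parents = infeas[np.argsort([violation(x) for x in infeas])[:10]]
        children += list(parents[rng.integers(0, len(parents), 20)] +
                         0.1 * rng.standard_normal((20, 2)))
    pop = np.array(children) if children else pop

feasible = [x for x in pop if violation(x) == 0]
print(f"{len(feasible)} feasible individuals; archive size {len(archive)}")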


Author(s):  
Cheng-Hung Chen ◽  
Marco P. Schoen ◽  
Ken W. Bosworth

A novel Condensed Hybrid Optimization (CHO) algorithm using Enhanced Continuous Tabu Search (ECTS) and Particle Swarm Optimization (PSO) is proposed. The proposed CHO algorithm combines the respective strengths of ECTS and PSO. ECTS is a modified Tabu Search (TS) with good search capabilities in large search spaces. In this study, ECTS is used to define smaller search spaces, which are then explored in a second stage by a basic PSO to find the respective local optima. ECTS covers the global search space using a TS concept called diversification and then selects the most promising areas of the search space. Once the promising regions are defined, the proposed CHO algorithm employs another TS concept, called intensification, to search the promising areas thoroughly. The proposed CHO algorithm is tested on the multi-dimensional Hyperbolic and Rosenbrock problems. Compared with four other algorithms, the simulation results indicate the accuracy and effectiveness of the proposed CHO algorithm.
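The diversify-then-intensify structure can be sketched compactly: a coarse first pass scatters probes over the full domain and keeps a short list of promising, mutually distant probes (a stand-in for ECTS diversification), and a basic PSO then intensifies inside a small box around each one. This is a simplified illustration on the 2-D Rosenbrock function, not the authors' ECTS implementation; the probe count, distance threshold and box size are assumptions.

import numpy as np

rng = np.random.default_rng(0)
rosen = lambda p: (1 - p[..., 0]) ** 2 + 100 * (p[..., 1] - p[..., 0] ** 2) ** 2
LO, HI = -5.0, 5.0

# Stage 1: diversification -- coarse probes, keep good points far from each other.
probes = rng.uniform(LO, HI, size=(400, 2))
order = np.argsort(rosen(probes))
promising = []
for p in probes[order]:
    if all(np.linalg.norm(p - q) > 1.5 for q in promising):
        promising.append(p)
    if len(promising) == 5:
        break

# Stage 2: intensification -- a basic PSO confined to a box around each probe.
def pso_in_box(center, half_width=1.0, n=15, iters=80):
    lo, hi = center - half_width, center + half_width
    pos = rng.uniform(lo, hi, size=(n, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), rosen(pos)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, 2))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = rosen(pos)
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, rosen(gbest)

results = [pso_in_box(c) for c in promising]
best_x, best_f = min(results, key=lambda r: r[1])
print("best point:", np.round(best_x, 4), "value:", round(float(best_f), 6))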

