Hyper-Heuristic Framework for Sequential Semi-Supervised Classification Based on Core Clustering

Symmetry ◽  
2020 ◽  
Vol 12 (8) ◽  
pp. 1292
Author(s):  
Ahmed Adnan ◽  
Abdullah Muhammed ◽  
Abdul Azim Abd Ghani ◽  
Azizol Abdullah ◽  
Fahrul Hakim

Existing stream-data learning models with limited labeling have a major limitation: a restricted capability to adapt to the evolving nature of the data, a phenomenon called concept drift. An algorithm must therefore dynamically update its internal parameters to counter the drift. Purely neural-network-based semi-supervised stream learning is not adequate for this, because the learner must quickly capture changes in the distribution and characteristics of the various classes while avoiding the influence of outdated knowledge stored in the neural network (NN). This article presents a framework that integrates an NN, a meta-heuristic based on an evolutionary genetic algorithm (GA), and a core online-offline clustering module (Core). The framework trains the NN on previously labeled data and uses its knowledge to compute the error of the core online-offline clustering block; the genetic optimization then selects the core-model parameters that minimize this error. This integration is designed to handle concept drift. We designate this model the hyper-heuristic framework for semi-supervised classification (HH-F). Experimental results from applying HH-F to real datasets demonstrate the superiority of the proposed framework over existing state-of-the-art approaches in the literature for sequential classification of data with an evolving nature.
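
To make the loop concrete, here is a minimal sketch of the HH-F idea as described in the abstract: a genetic algorithm searches the clustering block's parameters so that cluster-induced labels agree with a network trained on previously labeled data. The single-gene encoding, the fitness definition, and the use of KMeans as the clustering block are illustrative assumptions, not the authors' exact design.

```python
# Hypothetical sketch: a GA tunes a clustering parameter so the clustering
# block's labels agree with an NN trained on previously labeled data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# "Previously labeled" data to train the NN, and an unlabeled stream chunk.
X_lab = rng.normal(size=(200, 4)); y_lab = (X_lab[:, 0] > 0).astype(int)
X_stream = rng.normal(size=(300, 4))

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                    random_state=0).fit(X_lab, y_lab)
nn_labels = net.predict(X_stream)

def fitness(n_clusters: int) -> float:
    """Error of the clustering block measured against the NN's knowledge:
    fraction of stream points whose cluster's majority NN-label differs
    from their own NN-label."""
    clusters = KMeans(n_clusters=n_clusters, n_init=5,
                      random_state=0).fit_predict(X_stream)
    err = 0
    for c in range(n_clusters):
        members = nn_labels[clusters == c]
        if members.size:
            err += (members != np.bincount(members).argmax()).sum()
    return err / len(X_stream)

# Minimal GA over one integer gene (the cluster count).
pop = list(rng.integers(2, 20, size=8))
for _ in range(5):                                    # generations
    parents = sorted(pop, key=fitness)[:4]            # selection
    children = [max(2, (a + b) // 2 + int(rng.integers(-2, 3)))
                for a, b in zip(parents, parents[::-1])]  # crossover + mutation
    pop = parents + children
best = min(pop, key=fitness)
print("best cluster count:", best, "error:", fitness(best))
```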

Sensor Review ◽  
2017 ◽  
Vol 37 (3) ◽  
pp. 371-382 ◽  
Author(s):  
Qingchen Qiu ◽  
Xuelian Wu ◽  
Zhi Liu ◽  
Bo Tang ◽  
Yuefeng Zhao ◽  
...  

Purpose This paper aims to provide a framework for supervised hyperspectral classification and to review the traditional flowchart of hyperspectral image (HSI) analysis and processing. HSI technology has been under development for many years, and its applications have been promoted by technical advancements. Design/methodology/approach First, the properties and current status of hyperspectral technology are summarized. Then, this paper introduces a series of common classification approaches. In addition, a comparison of different classification approaches on real hyperspectral data is conducted. Finally, this survey presents a discussion of the classification results and points out trends in classification development. Findings The core of this survey is to review the state of the art of classification for hyperspectral images, to study the performance and efficiency of certain implementation measures, and to point out the challenges that still exist. Originality/value The study categorized supervised classification methods for hyperspectral images, demonstrated comparisons among these methods, and pointed out the challenges that still exist.


Algorithms ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 39
Author(s):  
Carlos Lassance ◽  
Vincent Gripon ◽  
Antonio Ortega

Deep Learning (DL) has attracted a lot of attention for its ability to reach state-of-the-art performance in many machine learning tasks. The core principle of DL methods consists of training composite architectures in an end-to-end fashion, where inputs are associated with outputs trained to optimize an objective function. Because of their compositional nature, DL architectures naturally exhibit several intermediate representations of the inputs, which belong to so-called latent spaces. When treated individually, these intermediate representations are most of the time unconstrained during the learning process, as it is unclear which properties should be favored. However, when processing a batch of inputs concurrently, the corresponding set of intermediate representations exhibits relations (what we call a geometry) on which desired properties can be sought. In this work, we show that it is possible to introduce constraints on these latent geometries to address various problems. In more detail, we propose to represent geometries by constructing similarity graphs from the intermediate representations obtained when processing a batch of inputs. By constraining these Latent Geometry Graphs (LGGs), we address the three following problems: (i) reproducing the behavior of a teacher architecture is achieved by mimicking its geometry, (ii) designing efficient embeddings for classification is achieved by targeting specific geometries, and (iii) robustness to deviations on inputs is achieved by enforcing smooth variation of geometry between consecutive latent spaces. Using standard vision benchmarks, we demonstrate the ability of the proposed geometry-based methods to solve the considered problems.
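
A minimal sketch of the Latent Geometry Graph construction described above, assuming a PyTorch setting: a cosine-similarity graph is built from a batch of intermediate representations, and a distillation-style loss penalizes the gap between the student's and the teacher's graphs (problem (i)). The normalization and the MSE graph loss are illustrative choices.

```python
# Build a similarity graph over a batch of latent representations and match
# a student's graph to a teacher's graph (geometry mimicking).
import torch
import torch.nn.functional as F

def latent_geometry_graph(feats: torch.Tensor) -> torch.Tensor:
    """feats: (batch, dim) intermediate representations -> (batch, batch)
    cosine-similarity graph whose vertices are the batch samples."""
    z = F.normalize(feats.flatten(start_dim=1), dim=1)
    return z @ z.t()

def geometry_distillation_loss(student_feats, teacher_feats):
    """Penalize the distance between student and teacher latent geometries."""
    gs = latent_geometry_graph(student_feats)
    gt = latent_geometry_graph(teacher_feats)
    return F.mse_loss(gs, gt)

# Toy usage: random activations standing in for two architectures' latents.
student = torch.randn(32, 64, requires_grad=True)
teacher = torch.randn(32, 128)          # dims may differ; both graphs are (32, 32)
loss = geometry_distillation_loss(student, teacher)
loss.backward()
print(float(loss))
```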


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 859
Author(s):  
Abdulaziz O. AlQabbany ◽  
Aqil M. Azmi

We are living in the age of big data, a majority of which is stream data. The real-time processing of this data requires careful consideration from different perspectives. Concept drift, a change in the data's underlying distribution, is a significant issue when learning from data streams: it requires learners to adapt to dynamic changes. Random forest is an ensemble approach that is widely used in classical non-streaming settings of machine learning applications, while the Adaptive Random Forest (ARF) is a stream learning algorithm that has shown promising results in terms of accuracy and the ability to deal with various types of drift. The continuity of the incoming instances allows their binomial sampling distribution to be approximated by a Poisson(1) distribution. In this study, we propose a mechanism to increase such streaming algorithms' efficiency by focusing on resampling. Our measure, resampling effectiveness (ρ), fuses the two most essential aspects of online learning: accuracy and execution time. We use six different synthetic data sets, each having a different type of drift, to empirically select the parameter λ of the Poisson distribution that yields the best value for ρ. By comparing the standard ARF with its tuned variations, we show that ARF performance can be enhanced by tackling this important aspect. Finally, we present three case studies from different contexts to test our proposed enhancement method and demonstrate its effectiveness in processing large data sets: (a) Amazon customer reviews (written in English), (b) hotel reviews (in Arabic), and (c) real-time aspect-based sentiment analysis of COVID-19-related tweets in the United States during April 2020. The results indicate that our proposed enhancement method yields considerable improvement in most situations.
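
For context, the resampling mechanism being tuned can be sketched as follows: in online bagging (and in ARF), each ensemble member trains on every incoming instance k ~ Poisson(λ) times, with λ = 1 approximating bootstrap resampling in the streaming setting. The toy incremental learner below is an assumption for illustration; the study's ρ measure additionally folds in execution time.

```python
# Poisson(λ) resampling for online ensembles: each member sees each stream
# instance k ~ Poisson(λ) times.
import numpy as np

rng = np.random.default_rng(42)

class MajorityClassLearner:
    """Stand-in incremental learner: predicts the majority class seen so far."""
    def __init__(self):
        self.counts = {}
    def partial_fit(self, x, y):
        self.counts[y] = self.counts.get(y, 0) + 1
    def predict(self, x):
        return max(self.counts, key=self.counts.get) if self.counts else 0

def online_bagging_step(ensemble, x, y, lam=1.0):
    """Feed one stream instance to every member, weighted by Poisson(λ)."""
    for member in ensemble:
        k = rng.poisson(lam)
        for _ in range(k):               # simulate a k-times-resampled instance
            member.partial_fit(x, y)

ensemble = [MajorityClassLearner() for _ in range(10)]
for t in range(1000):                    # toy stream with drift at t = 300
    x = rng.normal(size=3)
    y = int(t >= 300)
    online_bagging_step(ensemble, x, y, lam=6.0)
votes = [m.predict(None) for m in ensemble]
print("ensemble vote:", max(set(votes), key=votes.count))
```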


2021 ◽  
Author(s):  
Anh Nguyen ◽  
Khoa Pham ◽  
Dat Ngo ◽  
Thanh Ngo ◽  
Lam Pham

This paper provides an analysis of state-of-the-art activation functions with respect to supervised classification with deep neural networks. These activation functions comprise the Rectified Linear Unit (ReLU), Exponential Linear Unit (ELU), Scaled Exponential Linear Unit (SELU), Gaussian Error Linear Unit (GELU), and the Inverse Square Root Linear Unit (ISRLU). For evaluation, experiments over two deep learning network architectures integrating these activation functions are conducted. The first model, based on a Multilayer Perceptron (MLP), is evaluated on the MNIST dataset to compare these activation functions. Meanwhile, the second model, a VGGish-like architecture, is applied to Acoustic Scene Classification (ASC) Task 1A in the DCASE 2018 challenge, thereby evaluating whether these activation functions work well across different datasets and network architectures.
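
For reference, the compared activation functions can be written down directly; the short NumPy sketch below uses the tanh approximation of GELU and the standard SELU constants from Klambauer et al. (2017).

```python
# Reference implementations of the five activation functions compared above.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def selu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def gelu(x):
    # tanh approximation of x * Phi(x)
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def isrlu(x, alpha=1.0):
    # inverse square root linear unit: smooth, ELU-like negative branch
    return np.where(x >= 0, x, x / np.sqrt(1.0 + alpha * x**2))

xs = np.linspace(-3, 3, 7)
for f in (relu, elu, selu, gelu, isrlu):
    print(f.__name__, np.round(f(xs), 3))
```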


Author(s):  
Risheng Liu

Numerous tasks at the core of the statistics, learning, and vision areas are specific cases of ill-posed inverse problems. Recently, learning-based (e.g., deep) iterative methods have been empirically shown to be useful for these problems. Nevertheless, integrating learnable structures into iterations remains a laborious process that can only be guided by intuition or empirical insight. Moreover, there is a lack of rigorous analysis of the convergence behavior of these reimplemented iterations, so the significance of such methods remains somewhat unclear. We move beyond these limits and propose a theoretically guaranteed optimization learning paradigm, a generic and provable paradigm for nonconvex inverse problems, and develop a series of convergent deep models. Our theoretical analysis reveals that the proposed optimization learning paradigm allows us to generate globally convergent trajectories for learning-based iterative methods. Thanks to the superiority of our framework, we achieve state-of-the-art performance on different real applications.
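
As one hedged illustration of what a learnable iterative method looks like, the sketch below unrolls proximal-gradient (ISTA-style) iterations for a sparse inverse problem and learns the per-iteration step sizes end to end. It is a generic stand-in under these assumptions, not the authors' provably convergent scheme.

```python
# Unrolled proximal-gradient iterations for
#   min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
# with learnable per-iteration step sizes.
import torch

torch.manual_seed(0)
m, n, T = 30, 50, 10                        # measurements, signal dim, iterations
A = torch.randn(m, n) / m**0.5
x_true = torch.zeros(n); x_true[:5] = 1.0   # sparse ground truth
y = A @ x_true

steps = torch.nn.Parameter(0.5 * torch.ones(T))   # learnable step sizes
lam = 0.01
opt = torch.optim.Adam([steps], lr=1e-2)

def soft_threshold(v, thr):
    return torch.sign(v) * torch.clamp(v.abs() - thr, min=0.0)

def unrolled(y):
    x = torch.zeros(n)
    for t in range(T):                      # learned proximal-gradient steps
        grad = A.t() @ (A @ x - y)
        x = soft_threshold(x - steps[t] * grad, steps[t] * lam)
    return x

for epoch in range(200):                    # fit the steps to reconstruct x_true
    opt.zero_grad()
    loss = torch.sum((unrolled(y) - x_true) ** 2)
    loss.backward()
    opt.step()
print("recovery error:", float(loss))
```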


Author(s):  
Wenbin Li ◽  
Lei Wang ◽  
Jing Huo ◽  
Yinghuan Shi ◽  
Yang Gao ◽  
...  

The core idea of metric-based few-shot image classification is to directly measure the relations between query images and support classes in order to learn transferable feature embeddings. Previous work mainly focuses on image-level feature representations, which cannot effectively estimate a class's distribution owing to the scarcity of samples. Some recent work shows that local-descriptor-based representations are richer than image-level ones. However, such works still rely on a less effective instance-level metric, in particular a symmetric metric, to measure the relation between a query image and a support class. Given the naturally asymmetric relation between a query image and a support class, we argue that an asymmetric measure is more suitable for metric-based few-shot learning. To that end, we propose a novel Asymmetric Distribution Measure (ADM) network for few-shot learning that calculates a joint local and global asymmetric measure between the multivariate local distributions of a query and a class. Moreover, a task-aware Contrastive Measure Strategy (CMS) is proposed to further enhance the measure function. On the popular miniImageNet and tieredImageNet benchmarks, ADM achieves state-of-the-art results, validating our design of asymmetric distribution measures for few-shot learning. The source code can be downloaded from https://github.com/WenbinLee/ADM.git.
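
A hedged sketch in the spirit of ADM: fit multivariate Gaussians to the local descriptors of a query image and of a support class, then score the query with the asymmetric KL divergence KL(query ‖ class). Pooling all support descriptors into one class distribution and using plain KL are illustrative assumptions, not the paper's exact joint measure.

```python
# Asymmetric distribution measure between local-descriptor distributions.
import torch

def fit_gaussian(desc: torch.Tensor):
    """desc: (num_descriptors, dim) local descriptors -> mean, covariance."""
    mu = desc.mean(dim=0)
    centered = desc - mu
    cov = centered.t() @ centered / (desc.shape[0] - 1)
    return mu, cov + 1e-3 * torch.eye(desc.shape[1])   # regularize for stability

def kl_gaussian(mu_q, cov_q, mu_c, cov_c):
    """Asymmetric KL( N(mu_q, cov_q) || N(mu_c, cov_c) )."""
    d = mu_q.shape[0]
    cov_c_inv = torch.linalg.inv(cov_c)
    diff = mu_c - mu_q
    return 0.5 * (torch.trace(cov_c_inv @ cov_q)
                  + diff @ cov_c_inv @ diff - d
                  + torch.logdet(cov_c) - torch.logdet(cov_q))

# Toy episode: 64-dim descriptors, one query, one 5-shot support class.
torch.manual_seed(0)
query_desc = torch.randn(49, 64)               # e.g., a 7x7 local feature map
class_desc = torch.randn(5 * 49, 64) + 0.5     # pooled support descriptors
mu_q, cov_q = fit_gaussian(query_desc)
mu_c, cov_c = fit_gaussian(class_desc)
print("KL(query || class):", float(kl_gaussian(mu_q, cov_q, mu_c, cov_c)))
```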


Author(s):  
Hani Awni Hawamdeh

World Cup stadia have been a constant concern for hosting countries. Many of them have become a burden on their countries' economies, only to become white elephants after the tournaments end. The core mission of the Supreme Committee for Delivery & Legacy in Qatar was therefore to ensure that the World Cup stadiums are built with a legacy and remain functional in the long run, not just as facilities but as cultural icons. Such efforts have made stadium building in Qatar a positive and unique experience. As a firm, we at Arab Engineering Bureau are honored to have been part of the effort throughout the making of Al Thumama Stadium, which is discussed in this paper. Instead of a white elephant, Al Thumama Stadium is arguably a symbol of local identity that will become part of the World Cup legacy, whilst being a state-of-the-art facility that plays a vital role in the development of its surrounding neighborhood.


2004 ◽  
Vol 19 (1) ◽  
pp. 1-25 ◽  
Author(s):  
SARVAPALI D. RAMCHURN ◽  
DONG HUYNH ◽  
NICHOLAS R. JENNINGS

Trust is a fundamental concern in large-scale open distributed systems. It lies at the core of all interactions between the entities that have to operate in such uncertain and constantly changing environments. Given this complexity, these components, and the ensuing system, are increasingly being conceptualised, designed, and built using agent-based techniques and, to this end, this paper examines the specific role of trust in multi-agent systems. In particular, we survey the state of the art and provide an account of the main directions along which research efforts are being focused. In so doing, we critically evaluate the relative strengths and weaknesses of the main models that have been proposed and show how, fundamentally, they all seek to minimise the uncertainty in interactions. Finally, we outline the areas that require further research in order to develop a comprehensive treatment of trust in complex computational settings.


2020 ◽  
Author(s):  
Zhe Yang ◽  
Dejan Gjorgjevikj ◽  
Jian-Yu Long ◽  
Yan-Yang Zi ◽  
Shao-Hui Zhang ◽  
...  

Novelty detection is a challenging task in machinery fault diagnosis. A novel fault diagnostic method is developed that not only diagnoses known types of defects but also detects novelties, i.e., the occurrence of new types of defects that have never been recorded. To this end, a sparse autoencoder-based multi-head Deep Neural Network (DNN) is presented to jointly learn a shared encoding representation for both unsupervised reconstruction and supervised classification of the monitoring data. The detection of novelties is based on the reconstruction error. Moreover, the computational burden is reduced by directly training the multi-head DNN with the rectified linear unit activation function, instead of performing the pre-training and fine-tuning phases required for classical DNNs. The proposed method is applied to a benchmark bearing case study and to experimental data acquired from a delta 3D printer. The results show that it accurately diagnoses known types of defects and detects unknown defects, outperforming other state-of-the-art methods.
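
A minimal sketch, under assumed layer sizes, of the multi-head architecture described above: a shared encoder feeds a decoder head trained for unsupervised reconstruction and a classifier head for supervised diagnosis, with an L1 sparsity penalty on the code; novelty is flagged when the per-sample reconstruction error exceeds a threshold. The penalty weight and threshold rule are illustrative.

```python
# Sparse autoencoder-based multi-head DNN: shared encoder, reconstruction
# head for novelty detection, classifier head for known-fault diagnosis.
import torch
import torch.nn as nn

class MultiHeadDNN(nn.Module):
    def __init__(self, in_dim=128, latent=32, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                     nn.Linear(64, in_dim))   # reconstruction head
        self.classifier = nn.Linear(latent, n_classes)        # diagnosis head

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z), z

model = MultiHeadDNN()
x = torch.randn(16, 128)                        # a batch of monitoring signals
y = torch.randint(0, 4, (16,))                  # known fault labels
recon, logits, z = model(x)
loss = (nn.functional.mse_loss(recon, x)              # unsupervised reconstruction
        + nn.functional.cross_entropy(logits, y)      # supervised classification
        + 1e-3 * z.abs().mean())                      # sparsity penalty on the code
loss.backward()

# Novelty rule: per-sample reconstruction error above a calibrated threshold.
err = ((recon - x) ** 2).mean(dim=1)
is_novel = err > err.mean() + 3 * err.std()     # illustrative threshold
print(is_novel.sum().item(), "samples flagged as novel")
```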


Author(s):  
Jarne R. Verpoorten ◽  
Miche`le Auglaire ◽  
Frank Bertels

During a hypothetical Severe Accident (SA), core damage is to be expected due to insufficient core cooling. If the lack of core cooling persists, the degradation of the core can continue and could lead to the presence of corium in the lower plenum. There, the thermo-mechanical attack of the lower head by the corium could eventually lead to vessel failure and the release of corium into the reactor cavity pit. This paper describes how international state-of-the-art knowledge has been applied, in combination with plant-specific data, to obtain a custom Severe Accident Management (SAM) approach and hardware adaptations for existing NPPs. The interest of Tractebel Engineering in future SA research projects related to this topic is also addressed, from the viewpoint of keeping the analysis up to date with state-of-the-art knowledge.

