Object Detection using Feature Mining in a Distributed Machine Learning Framework

2017 ◽  
Author(s):  
Arne Ehlers

This dissertation addresses the problem of visual object detection based on machine-learned classifiers. A distributed machine learning framework is developed to learn detectors for several object classes, creating cascaded ensemble classifiers using the Adaptive Boosting (AdaBoost) algorithm. Methods are proposed that enhance several components of an object detection framework: First, the thesis deals with augmenting the training data in order to improve the performance of object detectors learned from sparse training sets. Second, feature mining strategies are introduced to create feature sets that are customized to the object class to be detected. Furthermore, a novel class of fractal features is proposed that allows a wide variety of shapes to be represented. Third, a method is introduced that models and combines internal confidences and uncertainties of the cascaded detector using Dempster’s theory of evidence in order to increase the quality of the post-processing. ...
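The rejection cascade described above can be sketched as follows; the stump encoding and stage thresholds here are illustrative assumptions, not the dissertation's actual implementation:

```python
def stage_score(x, stumps):
    # Weighted vote of decision stumps, as produced by AdaBoost.
    # Each stump is (feature_index, threshold, alpha) -- an assumed encoding.
    return sum(alpha * (1 if x[f] > t else -1) for f, t, alpha in stumps)

def cascade_detect(x, stages):
    # Attentional cascade: each stage is (stumps, stage_threshold).
    # A candidate window is rejected as soon as one stage's score falls
    # below its threshold; only windows passing every stage are detections.
    for stumps, threshold in stages:
        if stage_score(x, stumps) < threshold:
            return False
    return True
```

The cascade's early rejection is what makes dense sliding-window evaluation affordable: most windows are discarded by the cheap early stages.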

2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Justin Y. Lee ◽  
Britney Nguyen ◽  
Carlos Orosco ◽  
Mark P. Styczynski

Abstract Background The topology of metabolic networks is both well-studied and remarkably well-conserved across many species. The regulation of these networks, however, is much more poorly characterized, though it is known to be divergent across organisms—two characteristics that make it difficult to model metabolic networks accurately. While many computational methods have been built to unravel transcriptional regulation, there have been few approaches developed for systems-scale analysis and study of metabolic regulation. Here, we present a stepwise machine learning framework that applies established algorithms to identify regulatory interactions in metabolic systems based on metabolic data: stepwise classification of unknown regulation, or SCOUR. Results We evaluated our framework on both noiseless and noisy data, using several models of varying sizes and topologies to show that our approach is generalizable. We found that, when testing on data under the most realistic conditions (low sampling frequency and high noise), SCOUR could identify reaction fluxes controlled only by the concentration of a single metabolite (its primary substrate) with high accuracy. The positive predictive value (PPV) for identifying reactions controlled by the concentration of two metabolites ranged from 32 to 88% for noiseless data, 9.2 to 49% for either low sampling frequency/low noise or high sampling frequency/high noise data, and 6.6 to 27% for low sampling frequency/high noise data, with results typically sufficiently high for lab validation to be a practical endeavor. While the PPVs for reactions controlled by three metabolites were lower, they were still in most cases significantly better than random classification. Conclusions SCOUR uses a novel approach to synthetically generate the training data needed to identify regulators of reaction fluxes in a given metabolic system, enabling metabolomics and fluxomics data to be leveraged for regulatory structure inference.
By identifying and triaging the most likely candidate regulatory interactions, SCOUR can drastically reduce the amount of time needed to identify and experimentally validate metabolic regulatory interactions. As high-throughput experimental methods for testing these interactions are further developed, SCOUR will provide critical impact in the development of predictive metabolic models in new organisms and pathways.
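The stepwise idea behind SCOUR can be illustrated with a minimal sketch; the classifier interface and flux encoding below are hypothetical stand-ins for the established algorithms the framework actually applies:

```python
def scour_stepwise(fluxes, classifiers):
    """Stepwise classification of unknown regulation (simplified sketch).

    fluxes:      dict mapping flux name -> feature vector derived from
                 metabolite concentration and flux time-course data
    classifiers: list of (n_regulators, predict_fn) pairs applied in
                 order, e.g. single-metabolite control first, then
                 two-metabolite, then three-metabolite control
    """
    labels = {}
    remaining = dict(fluxes)
    for n_regulators, predict in classifiers:
        for name, feats in list(remaining.items()):
            if predict(feats):
                labels[name] = n_regulators
                del remaining[name]
    # Fluxes no classifier claimed remain unclassified.
    labels.update({name: None for name in remaining})
    return labels
```

Classifying the simple (single-regulator) cases first and passing only unresolved fluxes to later stages is what keeps the harder multi-regulator classification problems tractable.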


2020 ◽  
Vol 15 (1) ◽  
Author(s):  
Lihong Huang ◽  
Canqiang Xu ◽  
Wenxian Yang ◽  
Rongshan Yu

Abstract Background Studies on metagenomic data of environmental microbial samples found that microbial communities seem to be geolocation-specific, and the microbiome abundance profile can be a differentiating feature to identify samples’ geolocations. In this paper, we present a machine learning framework to determine the geolocations from metagenomics profiling of microbial samples. Results Our method was applied to the multi-source microbiome data from MetaSUB (The Metagenomics and Metadesign of Subways and Urban Biomes) International Consortium for the CAMDA 2019 Metagenomic Forensics Challenge (the Challenge). The goal of the Challenge is to predict the geographical origins of mystery samples by constructing microbiome fingerprints. First, we extracted features from metagenomic abundance profiles. We then randomly split the training data into training and validation sets and trained the prediction models on the training set. Prediction performance was evaluated on the validation set. Using logistic regression with L2 regularization, the prediction accuracy of the model reaches 86%, averaged over 100 random splits of the training and validation datasets. The testing data consists of samples from cities that do not occur in the training data. To predict these “mystery” cities, which were not sampled for the training data, we first defined biological coordinates for sampled cities based on the similarity of microbial samples from them. Then we performed an affine transform on the map such that the distance between cities measures their biological difference rather than geographical distance. After that, we derived the probabilities that a given testing sample originated from unsampled cities, based on its predicted probabilities on sampled cities, using Kriging interpolation. Results show that this method can successfully assign high probabilities to the true cities of origin of testing samples.
Conclusion Our framework shows good performance in predicting the geographic origin of metagenomic samples for cities where training data are available. Furthermore, we demonstrate the potential of the proposed method to predict metagenomic samples’ geolocations for samples from locations that are not in the training dataset.
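The final interpolation step can be illustrated with a simplified sketch; here, inverse-distance weighting in the biological coordinate space stands in for the Kriging interpolation the authors use, and the function name and signature are assumptions:

```python
import numpy as np

def interpolate_city_prob(query_xy, sampled_xy, sampled_probs, power=2.0):
    # Estimate the probability that a sample originates from an unsampled
    # city at query_xy in the (affine-transformed) biological coordinate
    # space, given the model's predicted probabilities at sampled cities.
    d = np.linalg.norm(sampled_xy - query_xy, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power   # nearer cities weigh more
    return float(np.dot(w, sampled_probs) / w.sum())
```

Because the coordinates encode biological rather than geographical distance, a nearby city in this space is one whose microbiome fingerprint is similar, which is exactly what makes the interpolated probabilities meaningful for unsampled locations.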


2021 ◽  
Vol 87 (11) ◽  
pp. 841-852
Author(s):  
S. Boukir ◽  
L. Guo ◽  
N. Chehata

In this article, margin theory is exploited to design better ensemble classifiers for remote sensing data. A semi-supervised version of the ensemble margin is at the core of this work. Some major challenges in ensemble learning are investigated using this paradigm in the difficult context of land cover classification: selecting the most informative instances to form an appropriate training set, and selecting the best ensemble members. The main contribution of this work lies in the explicit use of the ensemble margin as a decision method to select training data and base classifiers in an ensemble learning framework. The selection of training data is achieved through an innovative iterative guided bagging algorithm exploiting low-margin instances. The overall classification accuracy is improved by up to 3%, with more dramatic improvement in per-class accuracy (up to 12%). The selection of ensemble base classifiers is achieved by an ordering-based ensemble-selection algorithm relying on an original margin-based criterion that also targets low-margin instances. This method reduces the complexity (ensemble size under 30) but maintains performance.
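The supervised form of the ensemble margin, and its use for selecting low-margin training instances, can be sketched as follows; the article's semi-supervised variant additionally handles unlabeled data, and the function names here are illustrative:

```python
from collections import Counter

def ensemble_margin(votes, true_label):
    # Supervised ensemble margin: fraction of votes for the true class
    # minus the fraction for the most-voted other class, in [-1, 1].
    counts = Counter(votes)
    v_true = counts.get(true_label, 0)
    v_other = max((c for lbl, c in counts.items() if lbl != true_label),
                  default=0)
    return (v_true - v_other) / len(votes)

def low_margin_instances(all_votes, labels, k):
    # The k lowest-margin (most informative, hardest) instances are the
    # ones a guided bagging scheme would feed back into training.
    margins = [ensemble_margin(v, y) for v, y in zip(all_votes, labels)]
    return sorted(range(len(margins)), key=lambda i: margins[i])[:k]
```

Low-margin instances are those the ensemble nearly misclassifies, so concentrating training (or member selection) on them targets exactly the regions of the land-cover feature space where the ensemble is weakest.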


Author(s):  
Maroun Touma ◽  
Shalisha Witherspoon ◽  
Shonda Witherspoon ◽  
Isabelle Crawford-Eng

With the increasing deployment of smart buildings and infrastructure, Supervisory Control and Data Acquisition (SCADA) devices and the underlying IT network have become essential elements for the proper operation of these highly complex systems. Of course, with the increase in automation and the proliferation of SCADA devices, the attack surface of critical infrastructure has grown correspondingly. Understanding device behaviors in terms of known and understood or potentially qualified activities versus unknown and potentially nefarious activities in near-real time is a key component of any security solution. In this paper, we investigate the challenges of building robust machine learning models to identify unknowns purely from network traffic both inside and outside firewalls, starting with missing or inconsistent labels across sites, feature engineering and learning, temporal dependencies and analysis, and training data quality (including small sample sizes) for both shallow and deep learning methods. To demonstrate these challenges and the capabilities we have developed, we focus on Building Automation and Control networks (BACnet) from a private commercial building system. Our results show that a “Model Zoo” built from binary classifiers based on each device or behavior, combined with an ensemble classifier integrating information from all classifiers, provides a reliable methodology for identifying unknown devices as well as for determining specific known devices when the device type is in the training set. The capability of the Model Zoo framework is shown to be directly linked to feature engineering and learning, and the optimal feature selection varies across both the binary and ensemble classifiers.
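The "Model Zoo" decision rule can be sketched as follows; the dictionary-of-scorers interface and the confidence threshold are illustrative assumptions, not the authors' implementation:

```python
def model_zoo_predict(features, zoo, threshold=0.5):
    """One binary classifier per known device type or behavior.

    zoo: dict mapping device name -> scoring function that returns an
    estimated probability that the traffic came from that device.
    If no classifier is confident enough, the traffic is flagged as an
    unknown device rather than forced into a known class.
    """
    scores = {name: fn(features) for name, fn in zoo.items()}
    best, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best if best_score >= threshold else "unknown"
```

Keeping one binary model per device means new device types can be added to the zoo without retraining a monolithic multi-class model, and the "unknown" fallback is what lets the system surface never-before-seen devices.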


Author(s):  
Zhihui Li ◽  
Lina Yao ◽  
Xiaoqin Zhang ◽  
Xianzhi Wang ◽  
Salil Kanhere ◽  
...  

Object detection is important in real-world applications. Existing methods mainly focus on object detection with sufficient labelled training data or on zero-shot object detection with only concept names. In this paper, we address the challenging problem of zero-shot object detection with natural language description, which aims to simultaneously detect and recognize novel concept instances from textual descriptions. We propose a novel deep learning framework to jointly learn visual units, visual-unit attention and word-level attention, which are combined via element-wise multiplication to compute word-proposal affinity. To the best of our knowledge, this is the first work on zero-shot object detection with textual descriptions. Since there is no directly related work in the literature, we investigate plausible solutions based on existing zero-shot object detection methods for a fair comparison. We conduct extensive experiments on three challenging benchmark datasets, and the results confirm the superiority of the proposed model.
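The element-wise combination described in the abstract might be sketched as follows; the tensor shapes and the final reduction are assumptions, since the paper's exact architecture is not specified here:

```python
import numpy as np

def word_proposal_affinity(visual_units, unit_attn, word_attn):
    """Affinity between one region proposal and a textual description.

    visual_units: (U,) visual-unit activations for the proposal
    unit_attn:    (U,) attention weights over visual units
    word_attn:    (W, U) per-word attention weights over units
    """
    attended = visual_units * unit_attn   # element-wise, shape (U,)
    per_word = word_attn * attended       # broadcast to (W, U)
    return float(per_word.sum())          # scalar affinity score
```

Because the description side enters only through word-level attention, the detector can score proposals against descriptions of concepts never seen at training time, which is what makes the setup zero-shot.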


2019 ◽  
Vol 11 (7) ◽  
pp. 794 ◽  
Author(s):  
Karsten Lambers ◽  
Wouter Verschoof-van der Vaart ◽  
Quentin Bourgeois

Although the history of automated archaeological object detection in remotely sensed data is short, progress and emerging trends are evident. Among them, the shift from rule-based approaches towards machine learning methods is, at the moment, the cause for high expectations, even though basic problems, such as the lack of suitable archaeological training data, are only beginning to be addressed. In a case study in the central Netherlands, we are currently developing novel methods for multi-class archaeological object detection in LiDAR data based on convolutional neural networks (CNNs). This research is embedded in a long-term investigation of the prehistoric landscape of our study region. Here we present an innovative integrated workflow that combines machine learning approaches to automated object detection in remotely sensed data with a two-tier citizen science project that allows us to generate and validate detections of hitherto unknown archaeological objects, thereby contributing to the creation of reliable, labeled archaeological training datasets. We motivate our methodological choices in the light of current trends in archaeological prospection, remote sensing, machine learning, and citizen science, and present the first results of the implementation of the workflow in our research area.

