Quantum generative adversarial learning in a superconducting quantum circuit

2019, Vol. 5 (1), pp. eaav2761
Author(s): Ling Hu, Shu-Hao Wu, Weizhou Cai, Yuwei Ma, Xianghao Mu, ...

Generative adversarial learning is one of the most exciting recent breakthroughs in machine learning. It has shown splendid performance in a variety of challenging tasks such as image and video generation. More recently, a quantum version of generative adversarial learning has been theoretically proposed and shown to have the potential of exhibiting an exponential advantage over its classical counterpart. Here, we report the first proof-of-principle experimental demonstration of quantum generative adversarial learning in a superconducting quantum circuit. We demonstrate that, after several rounds of adversarial learning, a quantum-state generator can be trained to replicate the statistics of the quantum data output from a quantum channel simulator with high fidelity (98.8% on average), such that the discriminator cannot distinguish between the true and the generated data. Our results pave the way for experimentally exploring the long-sought quantum advantages in machine learning tasks with noisy intermediate-scale quantum devices.
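To make the adversarial game concrete, here is a minimal toy sketch of the training loop the abstract describes, written as a classical simulation: a parameterized single-qubit generator is trained against a discriminating measurement until the two produce matching statistics. The parameterization, learning rates, and finite-difference gradients are our own illustrative assumptions, not the authors' hardware protocol.

```python
# Toy classical simulation of the adversarial loop (not the authors' code):
# a single-qubit state generator is trained so that a discriminating
# measurement can no longer separate it from the target state.
import numpy as np

def state(theta, phi):
    # Parameterized single-qubit pure state |psi(theta, phi)>.
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

target = state(0.8, 1.2)  # stand-in for the quantum-channel output

def p_true(meas, psi):
    # Probability that the measurement axis `meas` labels `psi` as "true".
    return np.abs(np.vdot(state(*meas), psi)) ** 2

def grad(f, x, eps=1e-4):
    # Finite-difference gradient, standing in for hardware parameter-shift rules.
    return np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                     for e in np.eye(len(x))])

gen, disc = np.array([2.0, 0.0]), np.array([0.1, 0.1])
for step in range(200):
    # Discriminator ascends the gap between true and generated statistics.
    gap = lambda d: p_true(d, target) - p_true(d, state(*gen))
    disc += 0.2 * grad(gap, disc)
    # Generator tries to fool the discriminator by maximizing its "true" score.
    gen -= 0.2 * grad(lambda g: -p_true(disc, state(*g)), gen)

fidelity = np.abs(np.vdot(target, state(*gen))) ** 2
print(f"final fidelity: {fidelity:.4f}")  # high if the adversarial game converges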


2020, Vol. 34 (04), pp. 3954-3961
Author(s): Zhi Gao, Yuwei Wu, Xiaoxun Zhang, Jindou Dai, Yunde Jia, ...

Bilinear pooling has achieved state-of-the-art performance in fusing features for various machine learning tasks, owing to its ability to capture complex associations between features. Despite this success, bilinear pooling suffers from redundancy and burstiness issues, mainly due to the rank-one property of the resulting representation. In this paper, we prove that bilinear pooling is in fact a similarity-based coding-pooling formulation. This finding enables us to devise a new feature fusion algorithm, the factorized bilinear coding (FBC) method, which overcomes the drawbacks of bilinear pooling. We show that FBC can generate compact and discriminative representations with substantially fewer parameters. Experiments on two challenging tasks, namely image classification and visual question answering, demonstrate that our method surpasses the bilinear pooling technique by a large margin.
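For intuition, the sketch below contrasts plain bilinear pooling with a factorized low-rank fusion in the spirit of FBC. Note this is only a schematic: the paper's actual method adds a sparse coding step, and the rank r and dimensions here are arbitrary choices of ours.

```python
# Schematic comparison of plain bilinear pooling with a factorized
# low-rank fusion (illustrative only; FBC itself also involves coding).
import torch

x = torch.randn(32, 512)  # features from one branch (batch of 32)
y = torch.randn(32, 512)  # features from another branch

# Plain bilinear pooling: outer product -> a huge 512*512-dim representation.
bilinear = torch.einsum('bi,bj->bij', x, y).flatten(1)  # (32, 262144)

# Factorized variant: project both branches to rank r and fuse with an
# elementwise product, giving a compact representation with far fewer
# parameters than any map defined on the full outer product.
r = 64
U = torch.nn.Linear(512, r, bias=False)
V = torch.nn.Linear(512, r, bias=False)
compact = U(x) * V(y)                                   # (32, 64)
print(bilinear.shape, compact.shape)
```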



2020, Vol. 3 (1)
Author(s): H. Chen, L. Wossnig, S. Severini, H. Neven, M. Mohseni

Abstract: Recent results have demonstrated the successful application of quantum-classical hybrid methods to train quantum circuits for a variety of machine learning tasks. A natural follow-up question is whether we can also train such quantum circuits to discriminate quantum data, i.e., perform classification on data stored in the form of quantum states. Although quantum mechanics fundamentally forbids the deterministic discrimination of non-orthogonal states, we show in this work that it is possible to train a quantum circuit to discriminate such data with a trade-off between minimizing the error rate and the inconclusiveness rate of the classification task. Our approach simultaneously achieves performance close to the theoretically optimal values and the ability to generalize to previously unseen quantum data. This generalization power distinguishes our work from previous circuit-optimization results and furthermore provides an example of a quantum machine learning task that has inherently no classical analogue.
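The error-versus-inconclusiveness trade-off mentioned above can be illustrated without any circuit training: for two non-orthogonal states, classically mixing the minimum-error (Helstrom) measurement with the zero-error unambiguous (IDP) measurement traces out one such trade-off curve. The sketch below assumes equal priors and uses the standard textbook formulas; it is our illustration, not the paper's method.

```python
# Toy illustration of the error-vs-inconclusiveness trade-off for two
# non-orthogonal states with overlap s = |<psi1|psi2>| and equal priors.
import numpy as np

s = np.cos(0.4)                                   # state overlap
helstrom_error = 0.5 * (1 - np.sqrt(1 - s**2))    # min error, never inconclusive
idp_inconclusive = s                              # zero error, sometimes "don't know"

for q in np.linspace(0, 1, 6):
    # With probability q use the Helstrom measurement, otherwise the
    # unambiguous (IDP) measurement; interpolating sweeps the trade-off.
    error = q * helstrom_error
    inconclusive = (1 - q) * idp_inconclusive
    print(f"q={q:.1f}  error={error:.3f}  inconclusive={inconclusive:.3f}")
```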



2021, Vol. 69 (10), pp. 903-914
Author(s): Florian Buckermann, Nils Klement, Oliver Beyer, Andreas Hütten, Barbara Hammer

Abstract: The automation of quality control in manufacturing has made great strides in recent years, in particular following new developments in machine learning, specifically deep learning, which make it possible to solve challenging tasks such as visual inspection or quality prediction. Yet, optimal quality-control pipelines are often not obvious in specific settings, since they do not necessarily align with (supervised) machine learning tasks. In this contribution, we introduce a new automation pipeline for the quantification of wear on electrical contact pins. More specifically, we propose and test a novel pipeline that combines a deep network for image segmentation with geometric priors of the problem. This task is important for judging the quality of the material, and its automated evaluation can serve as a starting point for optimizing the choice of materials.
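As a rough illustration of combining a segmentation output with a geometric prior, the hypothetical sketch below takes a binary pin mask from a segmentation network, assumes an intact pin is approximately rectangular in its principal frame, and scores wear as the fraction of that ideal footprint with missing material. Every design choice here (the rectangular prior, the scoring rule) is an assumption of ours, not the authors' pipeline.

```python
# Hypothetical sketch of "segmentation + geometric prior": compare a predicted
# pin mask against the rectangle an intact pin would occupy.
import numpy as np

def wear_score(mask: np.ndarray) -> float:
    """mask: HxW boolean array, True where the network labels 'pin'."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    # Geometric prior (our assumption): an intact pin is roughly an
    # axis-aligned rectangle in its own principal frame.
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    aligned = pts @ vt.T                          # rotate into the pin frame
    w = aligned[:, 0].max() - aligned[:, 0].min() + 1  # inclusive pixel extents
    h = aligned[:, 1].max() - aligned[:, 1].min() + 1
    ideal_area = w * h                            # area an intact pin would cover
    return 1.0 - mask.sum() / ideal_area          # fraction of material missing

mask = np.zeros((64, 64), dtype=bool)
mask[10:50, 20:40] = True                         # intact rectangular "pin"
mask[40:50, 30:40] = False                        # simulated worn-off corner
print(f"wear score: {wear_score(mask):.2f}")      # roughly the removed fraction
```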



Author(s): Joseph D. Romano, Trang T. Le, Weixuan Fu, Jason H. Moore

Abstract: Automated machine learning (AutoML) and artificial neural networks (ANNs) have revolutionized the field of artificial intelligence by yielding high-performing models for a myriad of inductive learning tasks. In spite of their successes, little guidance exists on when to use one versus the other. Furthermore, relatively few tools exist that allow both AutoML and ANNs to be integrated in the same analysis, combining the strengths of both. Here, we present TPOT-NN, a new extension to the tree-based AutoML software TPOT, and use it to explore the behavior of automated machine learning augmented with neural network estimators (AutoML+NN), particularly in comparison to non-NN AutoML, in the context of simple binary classification on a number of public benchmark datasets. Our observations suggest that TPOT-NN is an effective tool that achieves greater classification accuracy than standard tree-based AutoML on some datasets, with no loss in accuracy on others. We also provide preliminary guidelines for performing AutoML+NN analyses and recommend possible future directions for AutoML+NN methods research, especially in the context of TPOT.
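A minimal usage sketch of TPOT with neural-network estimators enabled is shown below. The `config_dict='TPOT NN'` option follows the TPOT documentation as we understand it; verify the exact name against your installed version, and note the dataset and hyperparameters here are placeholders.

```python
# Sketch of running TPOT with NN estimators included in the search space.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

tpot = TPOTClassifier(
    generations=5,
    population_size=20,
    config_dict='TPOT NN',   # include PyTorch-based NN estimators (per TPOT-NN docs)
    verbosity=2,
    random_state=42,
)
tpot.fit(X_tr, y_tr)
print(tpot.score(X_te, y_te))        # held-out classification accuracy
tpot.export('tpot_nn_pipeline.py')   # export the best found pipeline as a script
```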



Algorithms, 2021, Vol. 14 (2), pp. 39
Author(s): Carlos Lassance, Vincent Gripon, Antonio Ortega

Deep Learning (DL) has attracted a lot of attention for its ability to reach state-of-the-art performance in many machine learning tasks. The core principle of DL methods consists of training composite architectures in an end-to-end fashion, where inputs are associated with outputs trained to optimize an objective function. Because of their compositional nature, DL architectures naturally exhibit several intermediate representations of the inputs, which belong to so-called latent spaces. When treated individually, these intermediate representations are most of the time unconstrained during the learning process, as it is unclear which properties should be favored. However, when processing a batch of inputs concurrently, the corresponding set of intermediate representations exhibits relations (what we call a geometry) on which desired properties can be sought. In this work, we show that it is possible to introduce constraints on these latent geometries to address various problems. In more detail, we propose to represent geometries by constructing similarity graphs from the intermediate representations obtained when processing a batch of inputs. By constraining these Latent Geometry Graphs (LGGs), we address the following three problems: (i) reproducing the behavior of a teacher architecture is achieved by mimicking its geometry, (ii) designing efficient embeddings for classification is achieved by targeting specific geometries, and (iii) robustness to deviations on inputs is achieved by enforcing smooth variation of geometry between consecutive latent spaces. Using standard vision benchmarks, we demonstrate the ability of the proposed geometry-based methods to solve the considered problems.
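The following sketch illustrates the core construction for problem (i): build a cosine-similarity graph over a batch's intermediate representations and train a student to mimic the teacher's graph rather than its raw features. The graph definition and loss below are our simplified reading of the idea, not the paper's exact formulation.

```python
# Illustrative sketch of latent-geometry matching for distillation: the
# student mimics the teacher's batch similarity graph, so teacher and
# student feature dimensions need not agree.
import torch
import torch.nn.functional as F

def latent_geometry_graph(feats: torch.Tensor) -> torch.Tensor:
    """feats: (batch, dim) intermediate representations -> (batch, batch) graph."""
    z = F.normalize(feats.flatten(1), dim=1)
    return z @ z.t()  # pairwise cosine similarities

def geometry_distillation_loss(student_feats, teacher_feats):
    # Match geometries rather than features directly.
    gs = latent_geometry_graph(student_feats)
    gt = latent_geometry_graph(teacher_feats)
    return F.mse_loss(gs, gt)

# Example: one batch of 16 inputs; teacher and student have different widths.
student = torch.randn(16, 128, requires_grad=True)
teacher = torch.randn(16, 512)
loss = geometry_distillation_loss(student, teacher)
loss.backward()  # gradients flow into the student representations
print(loss.item())
```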



2021, pp. 1-12
Author(s): Melesio Crespo-Sanchez, Ivan Lopez-Arevalo, Edwin Aldana-Bobadilla, Alejandro Molina-Villegas

In the last few years, text analysis has become a keystone in several domains for solving many real-world problems, such as machine translation, spam detection, and question answering, to mention a few. Many of these tasks can be approached by means of machine learning algorithms. Most of these algorithms take as input a transformation of the text in the form of feature vectors containing an abstraction of the content. Most recent vector representations focus on the semantic component of text; however, we consider that also taking the lexical and syntactic components into account could benefit learning tasks. In this work, we propose a content spectral-based text representation applicable to machine learning algorithms for text analysis. This representation integrates the spectra of the lexical, syntactic, and semantic components of text into an abstract image, which can be processed by both text and image learning algorithms. These components are derived from feature vectors of the text. To demonstrate the merit of our proposal, we tested it on text classification and reading-complexity score prediction tasks, obtaining promising results.
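To make the "abstract image" idea tangible, here is a highly schematic sketch that stacks three per-component spectra into a 2D array. The three extractors below are crude placeholders of our own invention; the paper's actual spectra are more elaborate.

```python
# Schematic only: stack lexical/syntactic/semantic spectra into a small
# "image" that either text classifiers (flattened) or image models can use.
import numpy as np

def lexical_spectrum(text):    # placeholder: character-frequency profile
    counts = np.zeros(64)
    for ch in text.lower():
        counts[ord(ch) % 64] += 1
    return counts / max(len(text), 1)

def syntactic_spectrum(text):  # placeholder: word-length histogram
    lengths = [min(len(w), 63) for w in text.split()]
    return np.bincount(lengths, minlength=64)[:64] / max(len(lengths), 1)

def semantic_spectrum(text):   # placeholder standing in for an embedding spectrum
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(64)

def content_image(text):
    # Row-stack the three spectra: a 3x64 abstract "image" of the content.
    return np.stack([lexical_spectrum(text),
                     syntactic_spectrum(text),
                     semantic_spectrum(text)])

img = content_image("Text analysis has become a keystone in several domains.")
print(img.shape)  # (3, 64)
```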



2021, Vol. 39 (15_suppl), pp. e13588-e13588
Author(s): Laura Sachse, Smriti Dasari, Marc Ackermann, Emily Patnaude, Stephanie OLeary, ...

Background: Pre-screening for clinical trials is becoming more challenging as inclusion/exclusion criteria become increasingly complex. Oncology precision medicine provides an exciting opportunity to simplify this process and quickly match patients with trials by leveraging machine learning technology. The Tempus TIME Trial site network matches patients to relevant, open, and recruiting clinical trials, personalized to each patient's clinical and molecular biology. Methods: Tempus screens patients at sites within the TIME Trial Network to find high-fidelity matches to clinical trials. The patient records include documentation submitted alongside NGS orders as well as electronic medical records (EMR) ingested through EMR Integrations. While Tempus-sequenced patients were automatically matched to trials using a Tempus-built matching application, EMR records were run through a natural language processing (NLP) data abstraction model to identify patients with an actionable gene of interest. Structured data were analyzed to filter to patients who lack a deceased date and have an encounter date within a predefined time period. Tempus abstractors manually validated the resulting unstructured records to ensure each patient was matched to a TIME Trial at a site capable of running the trial. For all high-level patient matches, a Tempus Clinical Navigator manually evaluated other clinical criteria to confirm trial matches and communicated with the site about trial options. Results: Patient matching was accelerated by combining NLP gene and report detection (which isolated 17% of records) with manual screening. As a result, Tempus has efficiently screened over 190,000 patients using proprietary NLP technology and matched 332 patients to 21 unique interventional clinical trials since program launch. Tempus continues to optimize its NLP models to increase high-fidelity trial matching at scale. Conclusions: The TIME Trial Network is an evolving, dynamic program that efficiently matches patients with clinical trial sites using both EMR and Tempus sequencing data. Here, we show how machine learning technology can be utilized to efficiently identify and recruit patients to clinical trials, thereby personalizing trial enrollment for each patient.
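The structured-data filter described in the Methods (no deceased date, recent encounter) is simple enough to sketch. The column names and reference date below are hypothetical; the actual Tempus schema and window are not public.

```python
# Hedged sketch of the stated structured-data pre-screen (hypothetical schema).
import pandas as pd

def prescreen(records: pd.DataFrame, asof: pd.Timestamp,
              window_days: int = 180) -> pd.DataFrame:
    """Keep patients with no deceased date and an encounter within the window."""
    cutoff = asof - pd.Timedelta(days=window_days)
    alive = records['deceased_date'].isna()
    recent = pd.to_datetime(records['last_encounter_date']) >= cutoff
    return records[alive & recent]

records = pd.DataFrame({
    'patient_id': [1, 2, 3],
    'deceased_date': [None, '2020-05-01', None],
    'last_encounter_date': ['2021-04-01', '2021-03-15', '2018-01-10'],
})
print(prescreen(records, asof=pd.Timestamp('2021-06-01')))  # only patient 1 passes
```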



2021, Vol. 6 (22), pp. 51-59
Author(s): Mustazzihim Suhaidi, Rabiah Abdul Kadir, Sabrina Tiun

Extracting features from input data is vital for successful classification and machine learning tasks. Classification is the process of assigning an object to one of several predefined categories. Many different feature selection and feature extraction methods exist and are widely used. Feature extraction is the transformation of large input data into a low-dimensional feature vector, which serves as the input to a classification or machine learning algorithm. The main challenge in feature extraction is to learn and extract knowledge from text datasets so as to make correct decisions. The objective of this paper is to give an overview of methods used in feature extraction for various applications, using a dataset containing a collection of texts taken from social media.
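As one concrete example of the kind of method such a survey covers, the sketch below uses TF-IDF to turn a handful of short social-media-style texts into low-dimensional feature vectors for a standard classifier. The toy texts and labels are ours.

```python
# Minimal feature-extraction example: TF-IDF vectors feeding a classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "great product, totally recommend it",
    "worst purchase ever, do not buy",
    "absolutely love this, works great",
    "terrible quality, very disappointed",
]
labels = [1, 0, 1, 0]  # toy sentiment labels

vectorizer = TfidfVectorizer(max_features=1000)
X = vectorizer.fit_transform(texts)  # sparse (n_texts, n_features) matrix

clf = LogisticRegression().fit(X, labels)
print(clf.predict(vectorizer.transform(["love it, great buy"])))
```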



Author(s): Himel Das Gupta, Kun Zhang, Victor S. Sheng

Deep neural networks (DNNs) have shown significant improvement in learning and generalizing across different machine learning tasks over the years, but this comes at the expense of heavy computational power and memory requirements. Machine learning applications now run even on portable devices such as mobile phones and embedded systems, which generally have limited computational and memory resources and can therefore only run small machine learning models. However, smaller networks usually do not perform very well. In this paper, we implement a simple ensemble-learning-based knowledge distillation network to improve the accuracy of such small models. Our experimental results show that the performance of smaller models can be enhanced by distilling knowledge from a combination of small models rather than from a single cumbersome model. Moreover, the ensemble knowledge distillation network is simpler, more time-efficient, and easy to implement.
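A schematic version of the ensemble-distillation loss is sketched below: the teacher signal is the averaged temperature-softened output of several small models, combined with the usual hard-label loss. This is a standard knowledge-distillation formulation written by us; the temperature and mixing weight are illustrative, not the paper's settings.

```python
# Sketch of ensemble knowledge distillation: average the soft outputs of
# several small teachers and distill them into one student.
import torch
import torch.nn.functional as F

def ensemble_kd_loss(student_logits, teacher_logits_list, labels,
                     T=4.0, alpha=0.5):
    # Soft target: mean of the ensemble's temperature-softened distributions.
    teacher_probs = torch.stack(
        [F.softmax(t / T, dim=1) for t in teacher_logits_list]).mean(0)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    teacher_probs, reduction='batchmean') * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Example with random stand-ins for one batch (8 samples, 10 classes).
student = torch.randn(8, 10, requires_grad=True)
teachers = [torch.randn(8, 10) for _ in range(3)]  # three small teacher models
labels = torch.randint(0, 10, (8,))
loss = ensemble_kd_loss(student, teachers, labels)
loss.backward()
print(loss.item())
```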


