Machine learning effective models for quantum systems

2020 ◽  
Vol 101 (24) ◽  
Author(s):  
Jonas B. Rigo ◽  
Andrew K. Mitchell
2021 ◽  
Vol 42 (7) ◽  
pp. 1622-1629
Author(s):  
I. I. Yusipov ◽  
V. D. Volokitin ◽  
A. V. Liniov ◽  
M. V. Ivanchenko ◽  
I. B. Meyerov ◽  
...  

Author(s):  
Lorenzo Barberis Canonico ◽  
Nathan J. McNeese ◽  
Chris Duncan

Internet technologies have created unprecedented opportunities for people to come together and, through their collective effort, generate large amounts of data about human behavior. With the increased popularity of grounded theory, many researchers have sought to use ever-larger datasets to analyze and draw patterns about social dynamics. However, the data is simply too big for a single human to derive effective models for many complex social phenomena. Computational methods offer a unique opportunity to analyze a wide spectrum of sociological events by leveraging the power of artificial intelligence. Within the human factors community, machine learning has emerged as the dominant AI approach to dealing with big data. However, along with its many benefits, machine learning has introduced a unique challenge: interpretability. The models of macro-social behavior generated by AI are so complex that rarely can they be translated into human understanding. We propose a new method of conducting grounded theory research that leverages the power of machine learning to analyze complex social phenomena through social network analysis while retaining interpretability as a core feature.
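As a minimal illustration of the kind of interpretable social-network-analysis metric such a method could build on (the interaction data and actor names below are invented for illustration; this is not the authors' implementation), degree centrality can be computed directly from raw interaction pairs:

```python
from collections import defaultdict

# Hypothetical interaction data: pairs of actors observed interacting.
interactions = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("carol", "dave"), ("dave", "erin"), ("carol", "erin"),
]

# Build an undirected adjacency structure.
adjacency = defaultdict(set)
for a, b in interactions:
    adjacency[a].add(b)
    adjacency[b].add(a)

# Degree centrality: fraction of the other actors each actor is connected to.
n = len(adjacency)
centrality = {actor: len(nbrs) / (n - 1) for actor, nbrs in adjacency.items()}

for actor, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{actor}: {score:.2f}")
```

Unlike a deep model's learned weights, each score here has a direct human reading ("carol interacts with everyone"), which is the interpretability property the proposed method aims to retain at scale.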


2018 ◽  
Vol 115 (52) ◽  
pp. 13216-13221 ◽  
Author(s):  
Bryce M. Henson ◽  
Dong K. Shin ◽  
Kieran F. Thomas ◽  
Jacob A. Ross ◽  
Michael R. Hush ◽  
...  

The control and manipulation of quantum systems without excitation are challenging, due to the complexities in fully modeling such systems accurately and the difficulties in controlling these inherently fragile systems experimentally. For example, while protocols to decompress Bose–Einstein condensates (BECs) faster than the adiabatic timescale (without excitation or loss) have been well developed theoretically, experimental implementations of these protocols have yet to reach speeds faster than the adiabatic timescale. In this work, we experimentally demonstrate an alternative approach based on a machine-learning algorithm that makes progress toward this goal. The algorithm is given control of the coupled decompression and transport of a metastable helium condensate, with its performance determined after each experimental iteration by measuring the excitations of the resultant BEC. After each iteration the algorithm adjusts its internal model of the system to create an improved control output for the next iteration. Given sufficient control over the decompression, the algorithm converges to a solution that sets the current speed record in relation to the adiabatic timescale, beating out other experimental realizations based on theoretical approaches. This method presents a feasible approach for implementing fast state preparations or transformations in other quantum systems, without requiring a solution to a theoretical model of the system. Implications for fundamental physics and cooling are discussed.
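The learner described above maintains an internal model of the experiment and updates it after every measurement. A minimal numpy sketch of the same measure-update-propose loop, with a toy quadratic cost standing in for the BEC excitation measurement and a simple shrinking-step hill climber in place of the authors' algorithm (both are assumptions made for illustration), might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def excitation_cost(params):
    """Stand-in for the experiment: 'measures' excitations for a given
    control setting (toy quadratic with optimum at (0.3, -0.7))."""
    target = np.array([0.3, -0.7])
    return float(np.sum((params - target) ** 2))

# Closed-loop optimization: propose controls, measure, update, repeat.
best_params = rng.uniform(-1, 1, size=2)
best_cost = excitation_cost(best_params)
width = 0.5
for iteration in range(200):
    candidate = best_params + rng.normal(0.0, width, size=2)
    cost = excitation_cost(candidate)
    if cost < best_cost:      # keep improvements ("update internal model")
        best_params, best_cost = candidate, cost
    width *= 0.99             # gradually trust the current optimum more

print(best_params, best_cost)
```

The real experiment replaces `excitation_cost` with a physical measurement, which is why this approach needs no solved theoretical model of the system: only cost evaluations.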


2020 ◽  
Vol 34 (20) ◽  
pp. 2050196
Author(s):  
Haozhen Situ ◽  
Zhimin He

Machine learning techniques can help to represent and solve quantum systems. Learning the measurement outcome distribution of a quantum ansatz is useful for characterizing near-term quantum computing devices. In this work, we use a popular unsupervised machine learning model, the variational autoencoder (VAE), to reconstruct the measurement outcome distribution of a quantum ansatz. The number of parameters in the VAE is compared with the number of measurement outcomes. The numerical results show that the VAE can efficiently learn the measurement outcome distribution with few parameters. The influence of entanglement on the task is also revealed.
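The central economy here, that a VAE can need far fewer parameters than the 2^n measurement outcomes it models, can be illustrated with a back-of-the-envelope count (the layer sizes below are invented for illustration, not taken from the paper):

```python
n_qubits = 12
n_outcomes = 2 ** n_qubits  # tabulating the distribution directly: one probability per bitstring

# Hypothetical small VAE: one-hidden-layer encoder and decoder MLPs with a
# low-dimensional latent space (sizes chosen for illustration only).
latent_dim, hidden_dim = 4, 32

# Encoder: input -> hidden -> (mean, log-variance) of the latent Gaussian.
encoder_params = (n_qubits * hidden_dim + hidden_dim
                  + hidden_dim * (2 * latent_dim) + 2 * latent_dim)
# Decoder: latent -> hidden -> per-qubit outcome probabilities.
decoder_params = (latent_dim * hidden_dim + hidden_dim
                  + hidden_dim * n_qubits + n_qubits)
vae_params = encoder_params + decoder_params

print(f"outcomes to tabulate directly: {n_outcomes}")   # 4096
print(f"VAE parameters (this toy architecture): {vae_params}")
```

The gap widens exponentially with qubit count: the table grows as 2^n while the parameter count of a fixed architecture grows only linearly in n.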


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Takeru Kusumoto ◽  
Kosuke Mitarai ◽  
Keisuke Fujii ◽  
Masahiro Kitagawa ◽  
Makoto Negoro

The kernel trick allows us to employ a high-dimensional feature space for a machine learning task without explicitly storing features. Recently, the idea of utilizing quantum systems for computing kernel functions using interference has been demonstrated experimentally. However, the dimension of the feature spaces in those experiments has been smaller than the number of data points, which makes them lose their computational advantage over the explicit method. Here we show the first experimental demonstration of a quantum kernel machine that achieves a regime where the dimension of the feature space greatly exceeds the number of data points, using 1H nuclear spins in a solid. The use of NMR allows us to obtain the kernel values with a single-shot experiment. We employ engineered dynamics correlating 25 spins, which is equivalent to using a feature space with a dimension over 10^15. This work presents quantum machine learning using one of the largest quantum systems to date.
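The kernel trick itself fits in a few lines: for a degree-2 polynomial kernel, the inner product of explicitly constructed feature vectors equals a kernel value computed without ever storing those features. This is a classical toy analogue of what the quantum hardware evaluates via interference:

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=3), rng.normal(size=3)

# Explicit feature map for the degree-2 polynomial kernel on R^3:
# all monomials x_i * x_j (9 features here; the dimension grows as n^d).
phi = lambda v: np.outer(v, v).ravel()

explicit = phi(x) @ phi(y)   # inner product computed in feature space
kernel = (x @ y) ** 2        # same value, features never stored

print(explicit, kernel)
```

The identity holds because phi(x)·phi(y) = Σᵢⱼ xᵢxⱼyᵢyⱼ = (x·y)²; the quantum advantage claimed in the paper comes from evaluating kernels whose feature spaces (dimension over 10^15) are far too large for the explicit route.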


2020 ◽  
Vol 22 (40) ◽  
pp. 22889-22899
Author(s):  
Xian Wang ◽  
Anshuman Kumar ◽  
Christian R. Shelton ◽  
Bryan M. Wong

Deep neural networks are a cost-effective machine-learning approach for solving the inverse problem of constructing electromagnetic fields that enable desired transitions in quantum systems.


2021 ◽  
Author(s):  
Yashpal Ramakrishnaiah ◽  
Levin Kuhlmann ◽  
Sonika Tyagi

Motivation: LncRNAs are far more versatile, and involved in many more regulatory roles inside the cell, than previously believed. Existing databases lack consistency in lncRNA annotations, and the functionality of over 95% of the known lncRNAs is yet to be established. LncRNA transcript identification involves discriminating them from their coding counterparts, which can be done with traditional experimental approaches or via in silico methods. The latter approach employs various computational algorithms, including machine learning classifiers, to predict the lncRNA-forming potential of a given transcript. Such approaches provide an economical and faster alternative to the experimental methods. Current in silico methods mainly use primary-sequence-based features to build predictive models, limiting their accuracy and robustness. Moreover, many of these tools make use of reference-genome-based features, consequently making them unsuitable for non-model species. Hence, there is a need to comprehensively evaluate the efficacy of different predictive features to build computational models. Additionally, effective models will have to provide maximum prediction performance using the fewest features in a species-agnostic manner.

It is popularly known in the protein world that "structure is function". This also applies to lncRNAs, as their functional mechanisms are similar to those of proteins. Generally, lncRNAs function by structurally binding to their target proteins or nucleic acids, forming complexes. The secondary structures of lncRNAs are modular, providing interaction sites for their interactome made of DNA, RNA, and proteins. Through these interactions, they epigenetically regulate cellular biology, thereby forming a layer of genomic programming on top of the coding genes. We demonstrate that, in addition to using the transcript sequence, we can provide comprehensive functional annotation by collating interactome and secondary-structure information.

Results: Here, we evaluated an exhaustive list of sequence-based, secondary-structure, interactome, and physicochemical features for their ability to predict the lncRNA potential of a transcript. Based on our analysis, we built different machine learning models using the optimum feature set. We found our models to be on par with, or exceeding, the performance of state-of-the-art methods, with AUC values of over 0.9 for a diverse collection of species tested. Finally, we built a pipeline called linc2function that provides the information necessary to functionally annotate a lncRNA conveniently in a single window.

Availability: The source code is available under the MIT license in standalone mode, and as a webserver (https://bioinformaticslab.erc.monash.edu/linc2function).
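As a toy illustration of the primary-sequence features such classifiers consume (this is not the linc2function feature set; the feature choices and the example transcript are invented for illustration), coding potential is often proxied by quantities like GC content and the longest open reading frame:

```python
def sequence_features(transcript):
    """Toy primary-sequence features of the kind used to discriminate
    lncRNAs from coding transcripts (illustrative only)."""
    t = transcript.upper()
    gc_content = (t.count("G") + t.count("C")) / len(t)

    # Longest open reading frame (ATG..stop) across the three forward frames.
    stops = {"TAA", "TAG", "TGA"}
    longest_orf = 0
    for frame in range(3):
        codons = [t[i:i + 3] for i in range(frame, len(t) - 2, 3)]
        start = None
        for idx, codon in enumerate(codons):
            if codon == "ATG" and start is None:
                start = idx
            elif codon in stops and start is not None:
                longest_orf = max(longest_orf, (idx - start + 1) * 3)
                start = None
    return {"length": len(t), "gc_content": gc_content, "longest_orf": longest_orf}

print(sequence_features("ATGAAATGAGGGCCC"))
```

A long ORF relative to transcript length suggests coding potential; lncRNA classifiers typically combine many such features, and the paper's point is that secondary-structure and interactome features add predictive signal beyond them.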


2020 ◽  
Vol 10 (19) ◽  
pp. 7009
Author(s):  
Jiyeon Kim ◽  
Minsun Shim ◽  
Seungah Hong ◽  
Yulim Shin ◽  
Eunjung Choi

As the number of Internet of Things (IoT) devices connected to the network rapidly increases, network attacks such as flooding and Denial of Service (DoS) are also increasing. These attacks cause network disruption and denial of service to IoT devices. However, the large number of heterogeneous devices deployed in the IoT environment makes it difficult to detect IoT attacks using traditional rule-based security solutions, and it is challenging to develop optimal security models for each type of device. Machine learning (ML) is an alternative technique that allows one to develop optimal security models based on empirical data from each device. We employ the ML technique for IoT attack detection. We focus on botnet attacks targeting various IoT devices and develop ML-based models for each type of device. We use the N-BaIoT dataset, generated by injecting botnet attacks (Bashlite and Mirai) into various types of IoT devices, including a Doorbell, Baby Monitor, Security Camera, and Webcam. We develop a botnet detection model for each device using numerous ML models, including deep learning (DL) models. We then identify the effective models with high detection F1-scores by carrying out both binary and multiclass classification for each model.
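The F1-score used to rank these detection models is the harmonic mean of precision and recall. A minimal sketch on hypothetical detector output (the labels below are invented for illustration):

```python
def f1_score(y_true, y_pred):
    """Binary F1-score, with label 1 = botnet traffic, 0 = benign."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical detector output on ten traffic windows.
y_true = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]
print(f1_score(y_true, y_pred))  # → 0.8
```

Because F1 balances false alarms (precision) against missed attacks (recall), it is a more informative ranking metric than raw accuracy when benign and botnet traffic are imbalanced, as they typically are in N-BaIoT.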


2019 ◽  
Vol 119 (23) ◽  
Author(s):  
Jean Michel Sellier ◽  
Kristina G. Kapanova ◽  
Jacob Leygonie ◽  
Gaetan Marceau Caron
