Deep reinforcement learning for efficient measurement of quantum devices

2021 ◽  
Vol 7 (1) ◽  
Author(s):  
V. Nguyen ◽  
S. B. Orbell ◽  
D. T. Lennon ◽  
H. Moon ◽  
F. Vigneau ◽  
...  

Abstract: Deep reinforcement learning is an emerging machine-learning approach that can teach a computer to learn from its actions and rewards, much as humans learn from experience. It offers many advantages in automating decision processes to navigate large parameter spaces. This paper proposes an approach to the efficient measurement of quantum devices based on deep reinforcement learning. We focus on double quantum dot devices, demonstrating the fully automatic identification of specific transport features called bias triangles. Measurements targeting these features are difficult to automate, since bias triangles are found in otherwise featureless regions of the parameter space. Our algorithm identifies bias triangles in a mean time of under 30 minutes, and sometimes in as little as 1 minute. This approach, based on dueling deep Q-networks, can be adapted to a broad range of devices and target transport features. This is a crucial demonstration of the utility of deep reinforcement learning for decision making in the measurement and operation of quantum devices.
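The dueling architecture mentioned in the abstract splits the network into a state-value stream and an action-advantage stream, combined as Q(s, a) = V(s) + A(s, a) − mean over actions of A(s, a). The following is a minimal NumPy sketch of that aggregation only, not the authors' network; all weight matrices, dimensions, and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def dueling_q_values(state, w_shared, w_value, w_adv):
    """Forward pass of a minimal dueling Q-network.

    The value and advantage streams are combined as
    Q(s, a) = V(s) + A(s, a) - mean_a A(s, a),
    the aggregation used by dueling deep Q-networks.
    """
    # Shared feature layer with a ReLU nonlinearity.
    features = np.maximum(0.0, state @ w_shared)
    value = features @ w_value       # scalar state value V(s), shape (1,)
    advantages = features @ w_adv    # one advantage per action, shape (n_actions,)
    return value + advantages - advantages.mean()

# Toy dimensions: an 8-dimensional state embedding and 4 candidate actions.
state = rng.normal(size=8)
w_shared = rng.normal(size=(8, 16))
w_value = rng.normal(size=(16, 1))
w_adv = rng.normal(size=(16, 4))

q = dueling_q_values(state, w_shared, w_value, w_adv)
best_action = int(np.argmax(q))
```

Subtracting the mean advantage makes the decomposition identifiable: the mean of the Q-values equals the state value V(s), so the two streams cannot drift against each other during training.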

Author(s):  
Sara Mizar Formentin ◽  
Barbara Zanuttigh

This contribution presents a new procedure for the automatic identification of individual overtopping events. The procedure is based on a zero-down-crossing analysis of the water-surface-elevation signals and, based on two threshold values, can be applied to any structure crest level, i.e. to emerged, zero-freeboard, over-washed and submerged conditions. The results of the procedure achieve a level of accuracy comparable to that of human-supervised analysis of the wave signals. The procedure includes a second algorithm for coupling the overtopping events registered at two consecutive gauges. This coupling algorithm enables a series of original applications of practical relevance, among others the possibility of estimating the wave celerities, i.e. the propagation velocities of the individual waves, which could be used as an approximation of the flow velocity in shallow-water and broken-flow conditions.
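The core of a zero-down-crossing analysis is to cut the elevation signal at the points where it passes the crest level going downward, then keep only the segments whose peak exceeds a threshold. The sketch below illustrates that idea on a synthetic signal; it is not the authors' procedure (which uses two thresholds and a gauge-coupling step), and the function name and parameters are hypothetical.

```python
import numpy as np

def detect_overtopping_events(eta, threshold, crest_level=0.0):
    """Split a water-surface-elevation signal into individual events.

    A zero-down-crossing occurs where the signal, measured relative to
    the structure crest level, passes from above to below that level.
    A segment between consecutive down-crossings is kept as an event
    only if its peak exceeds `threshold`.
    """
    s = np.asarray(eta) - crest_level
    # Indices where the signal crosses the crest level going downward.
    down = np.where((s[:-1] >= 0.0) & (s[1:] < 0.0))[0]
    events = []
    for start, end in zip(down[:-1], down[1:]):
        segment = s[start:end + 1]
        if segment.max() >= threshold:
            events.append((int(start), int(end), float(segment.max())))
    return events

# Synthetic signal: a small wave followed by a large one over a crest at 0.
t = np.linspace(0.0, 4.0 * np.pi, 400)
eta = np.sin(t) * np.where(t < 2.0 * np.pi, 0.2, 1.0)
events = detect_overtopping_events(eta, threshold=0.5)
```

On this signal only the large second wave survives the threshold, so a single event is reported with its start index, end index, and peak elevation.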


1997 ◽  
Vol 16 (5) ◽  
pp. 610-616 ◽  
Author(s):  
L. Verard ◽  
P. Allain ◽  
J.M. Travere ◽  
J.C. Baron ◽  
D. Bloyet

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
C.-Y. Pan ◽  
M. Hao ◽  
N. Barraza ◽  
E. Solano ◽  
F. Albarrán-Arriagada

Abstract: The characterization of observables, expressed via Hermitian operators, is a crucial task in quantum mechanics. For this reason, an eigensolver is a fundamental algorithm for any quantum technology. In this work, we implement a semi-autonomous algorithm to obtain an approximation of the eigenvectors of an arbitrary Hermitian operator using the IBM quantum computer. To this end, we only use single-shot measurements and pseudo-random changes handled by a feedback loop, reducing the number of measurements on the system. Due to the classical feedback loop, this algorithm can be cast into the reinforcement learning paradigm. Using this algorithm, for a single-qubit observable, we obtain both eigenvectors with fidelities over 0.97 with around 200 single-shot measurements. For two-qubit observables, we get fidelities over 0.91 with around 1500 single-shot measurements for the four eigenvectors, which is a comparatively low resource demand, suitable for current devices. This work contributes to the development of quantum devices able to make decisions from partial information, which helps to implement future technologies in quantum artificial intelligence.
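The feedback idea — propose a pseudo-random change to a candidate state and keep it only when the measured reward improves — can be illustrated with a purely classical simulation. The sketch below is an assumption-laden stand-in, not the authors' IBM implementation: it uses exact expectation values in place of single-shot measurements, and the annealing schedule and function name are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

def approximate_top_eigenvector(H, steps=3000, step_size=0.2):
    """Classical sketch of the feedback loop: propose pseudo-random
    changes to a candidate state and accept a change only when it
    raises the expectation value <psi|H|psi>.  The step size decays so
    the search refines over time, driving the state toward the
    eigenvector with the largest eigenvalue."""
    dim = H.shape[0]
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    psi /= np.linalg.norm(psi)
    reward = np.real(np.vdot(psi, H @ psi))
    for _ in range(steps):
        trial = psi + step_size * (rng.normal(size=dim) + 1j * rng.normal(size=dim))
        trial /= np.linalg.norm(trial)
        trial_reward = np.real(np.vdot(trial, H @ trial))
        if trial_reward > reward:   # positive feedback: keep the change
            psi, reward = trial, trial_reward
        step_size *= 0.999          # anneal the perturbation strength
    return psi

# Single-qubit observable: the Pauli-Z matrix, whose +1 eigenvector is |0>.
H = np.array([[1.0, 0.0], [0.0, -1.0]])
psi = approximate_top_eigenvector(H)
fidelity = abs(psi[0]) ** 2  # overlap with the exact eigenvector [1, 0]
```

In the paper the reward comes from single-shot measurement outcomes rather than the exact expectation value, which is what makes the approach suitable for real hardware with a limited measurement budget.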


2021 ◽  
Vol 11 (24) ◽  
pp. 11710
Author(s):  
Matteo Miani ◽  
Matteo Dunnhofer ◽  
Fabio Rondinella ◽  
Evangelos Manthos ◽  
Jan Valentin ◽  
...  

This study introduces a machine learning approach based on Artificial Neural Networks (ANNs) for the prediction of Marshall test results, stiffness modulus and air voids data of different bituminous mixtures for road pavements. A novel approach for the objective and semi-automatic identification of the optimal ANN structure, defined by the so-called hyperparameters, is introduced and discussed. Mechanical and volumetric data were obtained by conducting laboratory tests on 320 Marshall specimens, and the results were used to train the neural network. The k-fold Cross Validation method was used to partition the available data set and obtain an unbiased evaluation of the model's predictive error. The ANN hyperparameters were optimized using Bayesian optimization, which efficiently replaced the more costly trial-and-error procedure and automated the hyperparameter tuning. The proposed ANN model achieves a Pearson coefficient of 0.868.
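The k-fold scheme described above rotates each fold through the validation role while the remaining folds train the model, and the cross-validated error then scores each hyperparameter candidate. The sketch below shows that loop with plain NumPy; the ridge-regression stand-in, the random search (in place of the study's Bayesian optimization), and all names are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def k_fold_indices(n_samples, k):
    """Partition sample indices into k roughly equal folds (k-fold CV)."""
    idx = rng.permutation(n_samples)
    return np.array_split(idx, k)

def cross_validated_error(X, y, k, fit, predict):
    """Average held-out mean-squared error over k folds: each fold is
    used once for validation while the others train the model."""
    folds = k_fold_indices(len(y), k)
    errors = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        pred = predict(model, X[val])
        errors.append(np.mean((pred - y[val]) ** 2))
    return float(np.mean(errors))

# Toy stand-in for the ANN: ridge regression whose penalty `alpha`
# plays the role of a hyperparameter to be tuned.
def fit_ridge(alpha):
    def fit(X, y):
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)
    return fit

def predict_ridge(w, X):
    return X @ w

X = rng.normal(size=(320, 5))  # 320 specimens, 5 input features
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=320)

# Hyperparameter search: random search shown here as a simple stand-in
# for the Bayesian optimization used in the study.
candidates = 10.0 ** rng.uniform(-4, 2, size=20)
scores = [cross_validated_error(X, y, 5, fit_ridge(a), predict_ridge)
          for a in candidates]
best_alpha = candidates[int(np.argmin(scores))]
```

Bayesian optimization improves on this random search by fitting a surrogate over the score surface and proposing the next candidate where expected improvement is highest, which is what makes it cheaper than trial-and-error when each evaluation requires a full k-fold training cycle.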


Author(s):  
J S Schofield ◽  
D J Wright

In recent decades the UK has made significant advances in its approach to, and its results from, the management of naval platform vulnerability. This paper explores the history, guiding principles and assessment techniques of successful vulnerability management. World War II lessons learned are reviewed and shown to be still relevant today. These include structural and systems design features for the management of blast and fragmentation. Requirements must be set which are realistic and contractual. Through the design of several classes of ship using current vulnerability management principles it is now clear what can be achieved. Therefore realistic requirements can be effectively set. Quantitative vulnerability assessment is a key part of the design process, from the earliest concept to build and beyond. It is never too early to consider vulnerability, as the biggest gains can be made for the least cost during the early concept phases. However, early promise can be compromised by careless addition of supporting systems and services, so continuous monitoring is required. In order for vulnerability assessments to keep pace with and guide the direction of the developing design, efficient assessment tools are needed. If the model takes too long to build, the tool offers purely an audit function, rather than being a design aid. Such a tool is also an important input to Operational Analysis of the in-service fleet. As such, very large parameter spaces of results are needed, for the full threat spectrum against the whole fleet in a range of scenarios. SCL has developed the Purple Fire tool to facilitate the sorts of assessment required for modern platform designs, weapon programmes and operational analysis in support of the fleet. It provides the analyst with the ability to construct platform representations very quickly, meaning less model build time and more analysis time. 
It automates the consideration of large parameter spaces, allowing in-depth assessments to be conducted more quickly than ever.


2020 ◽  
Vol 17 (4) ◽  
pp. 1692-1695
Author(s):  
K. Sathish ◽  
Kumar Sanu Raj ◽  
J. V. Adithya Chowdary ◽  
Nitish Jahagirdar

In flash photography, red patches sometimes appear in human eyes: the bright flash reflects off the blood vessels at the back of the eye, giving the pupil an unnatural red hue. Red-eye is a persistent problem in professional photography. Most red-eye reduction tools in editing software require the user to locate the red-eye and trace an outline around it. Here we propose an Automatic Red-Eye Detection System instead. The system contains a red-eye detector that finds clusters of red pixels; a face detector used to eliminate most false positives (pixel clusters that look like red eyes but are not); and a red-eye outline detector. All three detectors are learned automatically from the collected datasets using boosted classifiers. To create a fully automatic red-eye corrector, this system needs to be combined with a functional red-eye reduction model.
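The first stage described above, finding clusters of red pixels, can be sketched as a redness test followed by connected-component grouping. This is an illustrative sketch under simple assumptions (a fixed red-dominance ratio, 4-connected flood fill), not the learned boosted detector the abstract describes; all thresholds and names are hypothetical.

```python
import numpy as np

def red_eye_mask(image, redness_threshold=1.5):
    """Flag pixels whose red channel strongly dominates green and blue,
    the color signature of red-eye."""
    r = image[..., 0].astype(float)
    gb = image[..., 1:].astype(float).mean(axis=-1) + 1.0  # avoid div by 0
    return (r / gb) > redness_threshold

def cluster_red_pixels(mask):
    """Group flagged pixels into 4-connected clusters with a simple
    flood fill; each cluster is a candidate red-eye region."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                            and mask[y, x] and labels[y, x] == 0):
                        labels[y, x] = current
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
    return labels, current

# Synthetic 20x20 image: one bright red square on a gray background.
img = np.full((20, 20, 3), 120, dtype=np.uint8)
img[5:9, 5:9] = (220, 40, 40)  # the simulated red-eye patch
mask = red_eye_mask(img)
labels, n_clusters = cluster_red_pixels(mask)
```

The face-detector stage then discards clusters that do not lie inside a detected face, which removes most false positives such as red clothing or lights.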

