Quantum Circuit Realization of the Bilinear Interpolation Method for GQIR

2017 ◽  
Vol 56 (9) ◽  
pp. 2966-2980 ◽  
Author(s):  
Ri-Gui Zhou ◽  
Xingao Liu ◽  
Jia Luo


2019 ◽  
Vol 64 (1) ◽  
Author(s):  
Ahmet Aydın ◽  
Cemil Keskinoğlu ◽  
Umut Kökbaş ◽  
Abdullah Tuli

Ultrasound is used in many analytical studies, including studies of liquid mixtures. Many mixtures are analyzed to understand their contents or properties under different conditions. One of these mixtures is the ethanol-water combination. In this study, the amount of ethanol in a liquid mixture was determined noninvasively by an ultrasonic method using a microcontroller-based system. The results show that the obtained measurements fell within the confidence interval at p < 0.05. Evaluation of the system's characteristics shows that it can detect ethanol concentrations as low as 0.552 g/L and has a broad, linear determination range for ethanol. Although the system was calibrated and tested with an ethanol-water mixture, it can be used for any mixture whose density changes with the concentration of the dissolved substance, including other water-soluble alcohols (glycols, glycol ethers, etc.) or any other material (solid or liquid) that is soluble in alcohol or another liquid solvent. The system has several advantages that make it convenient to use in many areas where the amount of ethanol in a mixture is essential: high accuracy and sensitivity, noninvasive operation, portability, and no destructive effect on the substance.
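
As a rough illustration of how such a microcontroller-based system can turn an ultrasonic reading into a concentration, the sketch below converts a measured time of flight into a sound speed and applies a linear calibration; the path length and calibration coefficients are hypothetical placeholders, not values from the study.

```python
# Hedged sketch (hypothetical calibration values, not from the paper): mapping a
# measured ultrasonic time of flight across a fixed path to a sound speed, and
# then to an ethanol concentration through an assumed linear calibration curve.
PATH_LENGTH_M = 0.05          # assumed transducer separation (m)
CAL_SLOPE = -0.85             # assumed g/L per (m/s), from a calibration fit
CAL_INTERCEPT = 1260.0        # assumed g/L offset, from the same fit

def ethanol_concentration(time_of_flight_s):
    """Estimate ethanol concentration (g/L) from a one-way time of flight (s)."""
    sound_speed = PATH_LENGTH_M / time_of_flight_s   # m/s through the mixture
    return CAL_SLOPE * sound_speed + CAL_INTERCEPT

print(ethanol_concentration(34.0e-6))  # example reading -> g/L estimate
```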


2021 ◽  
Author(s):  
Hideyuki Miyahara ◽  
Vwani Roychowdhury

Abstract The paradigm of variational quantum classifiers (VQCs) encodes classical information as quantum states, followed by quantum processing and then measurements to generate classical predictions. VQCs are promising candidates for efficient utilization of noisy intermediate-scale quantum (NISQ) devices: classifiers involving M-dimensional datasets can be implemented with only ⌈log2 M⌉ qubits by using an amplitude encoding. A general framework for designing and training VQCs, however, is lacking. An encouraging specific embodiment of VQCs, quantum circuit learning (QCL), utilizes an ansatz: a circuit with a predetermined circuit geometry and parametrized gates expressing a time-evolution unitary operator; training involves learning the gate parameters through a gradient-descent algorithm where the gradients themselves can be efficiently estimated by the quantum circuit. The representational power of QCL, however, depends strongly on the choice of the ansatz, as it limits the range of possible unitary operators that a VQC can search over. Equally importantly, the landscape of the optimization problem may have challenging properties such as barren plateaus, and the associated gradient-descent algorithm may not find good local minima. Thus, it is critically important to estimate (i) the price of ansatz, that is, the gap between the performance of QCL and the performance of ansatz-independent VQCs, and (ii) the price of using quantum circuits as classical classifiers, that is, the performance gap between VQCs and equivalent classical classifiers. This paper develops a computational framework to address both these open problems. First, it shows that VQCs, including QCL, fit inside the well-known kernel method. Next it introduces a framework for efficiently designing ansatz-independent VQCs, which we call the unitary kernel method (UKM). The UKM framework enables one to estimate the first known bounds on both the price of ansatz and the price of any speedup advantages of VQCs: numerical results with datasets of various dimensions, ranging from 4 to 256, show that the ansatz-induced gap can vary between 10 and 20%, while the VQC-induced gap (between VQC and kernel method) can vary between 10 and 16%. To further understand the role of ansatz in VQCs, we also propose a method of decomposing a given unitary operator into a quantum circuit, which we call the variational circuit realization (VCR): given any parametrized circuit block (as for example used in QCL), it finds optimal parameters and the number of layers of the circuit block required to approximate any target unitary operator with a given precision.
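
As a concrete illustration of the amplitude encoding the abstract refers to, the following minimal Python sketch (not from the paper) pads an M-dimensional feature vector to the nearest power of two and L2-normalizes it, so that it can serve as the amplitude vector of a ⌈log2 M⌉-qubit state.

```python
# Minimal sketch (not the authors' code): amplitude encoding of an
# M-dimensional sample into ceil(log2 M) qubits.
import numpy as np

def amplitude_encode(x):
    """Map a real feature vector x to a normalized amplitude vector.

    The state lives on n = ceil(log2 M) qubits; x is zero-padded to length
    2**n and L2-normalized so it is a valid quantum state vector.
    """
    x = np.asarray(x, dtype=float)
    n_qubits = int(np.ceil(np.log2(len(x))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(x)] = x
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return padded / norm, n_qubits

state, n = amplitude_encode([0.2, 0.5, 0.1, 0.9, 0.3])  # M = 5 -> 3 qubits
print(n, state)
```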


2016 ◽  
Vol 45 (7) ◽  
pp. 70710001
Author(s):  
梁志虎 LIANG Zhi-hu ◽  
张小宁 ZHANG Xiao-ning ◽  
岳俊峰 YUE Jun-feng ◽  
屠震涛 TU Zhen-tao ◽  
黄泰钧 HUANG Tai-jun ◽  
...  

Author(s):  
Riccardo Rasconi ◽  
Angelo Oddi

Quantum Computing represents the next big step towards a speed boost in computation, which promises major breakthroughs in several disciplines including Artificial Intelligence. This paper investigates the performance of a genetic algorithm to optimize the realization (compilation) of nearest-neighbor compliant quantum circuits. Current technological limitations (e.g., the decoherence effect) impose that the overall duration (makespan) of the quantum circuit realization be minimized, and therefore the makespan-minimization problem of compiling quantum algorithms on present or future quantum machines is attracting increasing attention in the AI community. In our genetic algorithm, a solution is built utilizing a novel chromosome encoding where each gene controls the iterative selection of a quantum gate to be inserted in the solution, over a lexicographic double-key ranking returned by a heuristic function recently published in the literature. Our algorithm has been tested on a set of quantum circuit benchmark instances of increasing sizes available from the recent literature. We demonstrate that our genetic approach obtains very encouraging results that outperform the solutions obtained in previous research against the same benchmark, significantly improving the makespan values for a large number of instances.
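
The chromosome encoding can be pictured as follows: each gene is an integer that selects one gate from a candidate list ranked lexicographically by two heuristic keys. The Python sketch below is an illustrative rendering of that decoding step; the candidate generator and heuristic keys are placeholders, not the ranking function actually used in the paper.

```python
# Hedged sketch (illustrative only, not the paper's implementation): decoding a
# chromosome in which each gene picks one gate from a candidate list ranked
# lexicographically by two heuristic keys.
def decode_chromosome(chromosome, candidate_gates, heuristic_keys):
    """Build a gate sequence gene by gene.

    chromosome      -- list of non-negative integers, one per insertion step
    candidate_gates -- callable(partial_solution) -> list of applicable gates
    heuristic_keys  -- callable(gate, partial_solution) -> (key1, key2)
    """
    solution = []
    for gene in chromosome:
        candidates = candidate_gates(solution)
        if not candidates:
            break
        # lexicographic double-key ranking: sort by (key1, key2)
        ranked = sorted(candidates, key=lambda g: heuristic_keys(g, solution))
        # the gene selects a position in the ranking (wrapped to stay in range)
        solution.append(ranked[gene % len(ranked)])
    return solution
```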


2020 ◽  
Vol 10 (10) ◽  
pp. 3658
Author(s):  
Karshiev Sanjar ◽  
Olimov Bekhzod ◽  
Jaeil Kim ◽  
Jaesoo Kim ◽  
Anand Paul ◽  
...  

The early and accurate diagnosis of skin cancer is crucial for providing patients with advanced treatment by focusing medical personnel on specific parts of the skin. Networks based on encoder–decoder architectures have been effectively implemented for numerous computer-vision applications. U-Net, a CNN architecture based on the encoder–decoder design, has achieved successful performance for skin-lesion segmentation. However, this network has several drawbacks caused by its upsampling method and activation function. In this paper, a fully convolutional network architecture is proposed based on a modified U-Net, in which a bilinear interpolation method is used for upsampling, with a block of convolution layers followed by parametric rectified linear-unit (PReLU) non-linearity. To avoid overfitting, dropout is applied after each convolution block. The results demonstrate that our recommended technique achieves state-of-the-art performance for skin-lesion segmentation, with 94% pixel accuracy and an 88% Dice coefficient.
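
The decoder modification described above can be sketched in PyTorch as a block that performs bilinear upsampling followed by convolutions, PReLU activations, and dropout; the layer ordering and hyperparameters below are assumptions for illustration, not the authors' exact architecture.

```python
# Hedged sketch (assumed layer ordering, not the authors' exact model): a
# decoder block using bilinear upsampling, convolutions with PReLU, and dropout.
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    def __init__(self, in_ch, out_ch, p_drop=0.2):
        super().__init__()
        # bilinear interpolation for upsampling instead of transposed convolution
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.PReLU(),                      # parametric rectified linear unit
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.PReLU(),
            nn.Dropout2d(p_drop),            # dropout after the convolution block
        )

    def forward(self, x):
        return self.block(self.up(x))

# Example: upsample a 32x32 feature map to 64x64 with 64 output channels.
x = torch.randn(1, 128, 32, 32)
print(DecoderBlock(128, 64)(x).shape)  # torch.Size([1, 64, 64, 64])
```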


2020 ◽  
Vol 174 (3-4) ◽  
pp. 259-281
Author(s):  
Angelo Oddi ◽  
Riccardo Rasconi

In this work we investigate the performance of greedy randomised search (GRS) techniques on the problem of compiling quantum circuits to emerging quantum hardware. Quantum computing (QC) represents the next big step towards power-consumption minimisation and CPU speed boost in the future of computing machines. Quantum computing uses quantum gates that manipulate quantum bits (qubits). A quantum circuit is composed of a number of qubits and a series of quantum gates that operate on those qubits, and whose execution realises a specific quantum algorithm. Current quantum computing technologies limit the qubit interaction distance, allowing the execution of gates between adjacent qubits only. This has opened the way to the exploration of possible techniques aimed at guaranteeing nearest-neighbor (NN) compliance in any quantum circuit through the addition of a number of so-called swap gates between adjacent qubits. In addition, technological limitations (the decoherence effect) impose that the overall duration (makespan) of the quantum circuit realization be minimized. One core contribution of the paper is the definition of two lexicographic ranking functions for quantum gate selection, using two keys: one key acts as a global closure metric to minimise the solution makespan; the second is a local metric that favours the mutual approach of the closest pairs of qubit states. We present a GRS procedure that synthesises NN-compliant quantum circuit realizations, starting from a set of benchmark instances of different sizes belonging to the Quantum Approximate Optimization Algorithm (QAOA) class tailored for the MaxCut problem. We propose a comparison between the presented meta-heuristics and the approaches used in the recent literature against the same benchmarks, from both the CPU-efficiency and the solution-quality standpoints. In particular, we compare our approach against a reference benchmark initially proposed and subsequently expanded in [1] by considering: (i) variable qubit state initialisation and (ii) crosstalk constraints that further restrict parallel gate execution.
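
A single greedy randomised selection step of this kind can be sketched as follows; the two key functions are placeholders standing in for the paper's global closure metric and local distance metric, and the restricted-candidate-list size is an assumed parameter for illustration.

```python
# Hedged sketch (illustrative, not the authors' code): one greedy randomised
# gate-selection step. Candidate gates are ranked lexicographically by a
# global makespan-oriented key and a local qubit-distance key, and the next
# gate is drawn at random from the best few candidates, GRASP-style.
import random

def select_next_gate(candidates, global_key, local_key, rcl_size=3, rng=random):
    """candidates -- gates applicable to the current partial circuit
    global_key -- callable(gate) -> estimated makespan contribution (lower is better)
    local_key  -- callable(gate) -> distance between the qubit states it brings closer
    """
    ranked = sorted(candidates, key=lambda g: (global_key(g), local_key(g)))
    restricted = ranked[:rcl_size]          # restricted candidate list
    return rng.choice(restricted)           # randomised greedy choice
```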


2018 ◽  
Vol 58 (2) ◽  
pp. 415-435 ◽  
Author(s):  
Ping Fan ◽  
Ri-Gui Zhou ◽  
WenWen Hu ◽  
Naihuan Jing

2017 ◽  
Vol 7 (1) ◽  
Author(s):  
Ri-Gui Zhou ◽  
Wenwen Hu ◽  
Ping Fan ◽  
Hou Ian

2021 ◽  
Author(s):  
W. Logan Downing ◽  
Howell Li ◽  
William T. Morgan ◽  
Cassandra McKee ◽  
Darcy M. Bullock

Rain affects roadways through wet pavement, standing water, decreased visibility, and wind gusts, and can lead to hazardous driving conditions. This study investigates the use of high-fidelity Doppler data at 1 km spatial and 2-minute temporal resolution in combination with commercial probe speed data on freeways. Segment-based space-mean speeds were used, and drops in speed during rainfall events of 5.5 mm/hour or greater were assessed over a one-month period on a section of four- to six-lane interstate. Speed reductions were evaluated as a time series over a 1-hour window together with the rain data. Three interpolation methods for estimating rainfall rates were tested and seven metrics were developed for the analysis. The study found that sharp drops in speed of more than 40 mph occurred at estimated rainfall rates of 30 mm/hour or greater, but the drops did not become more severe beyond this threshold. The average time from first detected rainfall to an impact on speeds was 17 minutes. The bilinear method detected the greatest number of events during the 1-month period, with the most conservative rate of predicted rainfall. Rainfall intensities for the 39 events were estimated to range from 7.5 to 106 mm/hour. This range was much greater than the heavy-rainfall categorization of 16 mm/hour reported in previous studies in the literature. The bilinear interpolation method for Doppler data is recommended because it detected the greatest number of events and had the longest rain duration and lowest estimated maximum rainfall of the three methods tested, suggesting the method balanced awareness of the weather conditions around the roadway with isolated, localized rain intensities.
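
For reference, bilinear interpolation of a gridded rainfall field at a roadway point works as in the short Python sketch below; the grid values and query coordinates are illustrative, not taken from the study's Doppler data.

```python
# Hedged sketch (assumed grid layout): bilinear interpolation of a gridded
# Doppler rainfall-rate field (mm/hour) at a point given in fractional
# grid coordinates, e.g. the location of a freeway segment.
import numpy as np

def bilinear_rain_rate(grid, x, y):
    """grid -- 2-D array of rainfall rates at 1 km cell centers
    x, y -- fractional column/row coordinates of the query point
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, grid.shape[1] - 1), min(y0 + 1, grid.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * grid[y0, x0] + fx * grid[y0, x1]       # interpolate along x, upper row
    bottom = (1 - fx) * grid[y1, x0] + fx * grid[y1, x1]    # interpolate along x, lower row
    return (1 - fy) * top + fy * bottom                     # interpolate along y

rain = np.array([[5.0, 12.0], [20.0, 35.0]])   # 2x2 patch of 1 km cells
print(bilinear_rain_rate(rain, 0.25, 0.75))    # interpolated mm/hour (19.5)
```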

