HHL Analysis and Simulation Verification Based on Origin Quantum Platform

2021 ◽ Vol 2113 (1) ◽ pp. 012083
Author(s): Xiaonan Liu, Lina Jing, Lin Han, Jie Gao

Abstract Solving large-scale systems of linear equations is of great significance in many engineering fields, such as weather forecasting and bioengineering. When a classical computer solves a linear system, whether by elimination or by Cramer's rule, the time required grows polynomially with the size of the system. With the advent of the big-data era, transistor integration density keeps increasing; once transistor dimensions approach the order of the electron diameter, quantum tunneling occurs and Moore's Law no longer holds, so the traditional computing model will not be able to meet demand. In this paper, through an in-depth study of the classic HHL algorithm, a small-scale quantum circuit model is proposed to solve a 2×2 system of linear equations, and the circuit is simulated and verified, both as a circuit diagram and in code, on the Origin Quantum Platform. The fidelity reaches more than 90% for the different parameter values tested. When the matrix to be solved is sparse, the quantum algorithm offers an exponential speedup over the best known classical algorithm.
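As a rough illustration of the idea behind HHL (not the paper's actual Origin Quantum circuit), the sketch below emulates the algorithm's core step classically with NumPy: decompose |b⟩ in the eigenbasis of a Hermitian matrix A, divide each component by its eigenvalue (the role of the controlled-rotation step), renormalize, and compare with the exact solution via the fidelity. The 2×2 matrix and right-hand side are illustrative assumptions, not the instance studied in the paper.

```python
import numpy as np

# Illustrative 2x2 Hermitian system A|x> = |b> (not the paper's exact instance).
A = np.array([[1.5, 0.5],
              [0.5, 1.5]])
b = np.array([1.0, 0.0])

# HHL idea: expand |b> in the eigenbasis of A, multiply each component
# by 1/lambda (the controlled-rotation step), then renormalize the state.
eigvals, eigvecs = np.linalg.eigh(A)
b_in_eigenbasis = eigvecs.conj().T @ b
x_unnormalized = eigvecs @ (b_in_eigenbasis / eigvals)
x_hhl = x_unnormalized / np.linalg.norm(x_unnormalized)

# Reference solution from a classical solver, normalized for comparison.
x_exact = np.linalg.solve(A, b)
x_exact /= np.linalg.norm(x_exact)

# Fidelity |<x_exact|x_hhl>|^2 between the two normalized states.
fidelity = abs(np.vdot(x_exact, x_hhl)) ** 2
print("normalized HHL-style solution:", x_hhl)
print("fidelity vs. exact solution:  ", fidelity)
```

In this idealized classical emulation the fidelity is exactly 1; on a real device, finite phase-estimation precision and noise pull it below 1, which is what the reported >90% figures quantify.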

2012 ◽ Vol 27 (1) ◽ pp. 124-140
Author(s): Bin Liu, Lian Xie

Abstract Accurately forecasting a tropical cyclone's (TC) track and intensity remains one of the top priorities in weather forecasting. A dynamical downscaling approach based on the scale-selective data assimilation (SSDA) method is applied to demonstrate its effectiveness in TC track and intensity forecasting. The SSDA approach retains the merits of global models in representing large-scale environmental flows and of regional models in describing small-scale characteristics. The regional model is driven from the model domain interior by assimilating large-scale flows from global models, as well as from the lateral boundaries by conventional sponge-zone relaxation. Using Hurricane Felix (2007) as a demonstration case, it is shown that, by assimilating large-scale flows from Global Forecast System (GFS) forecasts into the regional model, the SSDA experiments outperform both the original GFS forecasts and the control experiments, in which the regional model is driven only by lateral boundary conditions. The overall mean track forecast error for the SSDA experiments is reduced by over 40% relative to the control experiments and by about 30% relative to the GFS forecasts. In terms of TC intensity, benefiting from the higher grid resolution that better represents regional and small-scale processes, both the control and SSDA runs outperform the GFS forecasts. The SSDA runs show approximately 14% less overall mean intensity forecast error than the control runs. It should be noted that, for the Felix case, the advantage of SSDA becomes more evident for forecasts with lead times longer than 48 h.
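A minimal 1-D sketch of the scale-selective idea, under strong simplifying assumptions: the "assimilation" is reduced to a hard spectral cutoff that replaces the lowest wavenumbers of a regional field with those of a global field while keeping the regional small scales. The fields and the cutoff are synthetic and only meant to show the mechanics, not the SSDA implementation used in the paper.

```python
import numpy as np

def scale_selective_blend(regional, global_field, n_keep):
    """Replace the largest scales (lowest wavenumbers) of the regional field
    with those of the global field; keep the regional small scales.
    n_keep is the number of low wavenumbers treated as 'large scale'."""
    r_hat = np.fft.rfft(regional)
    g_hat = np.fft.rfft(global_field)
    blended_hat = r_hat.copy()
    blended_hat[:n_keep] = g_hat[:n_keep]      # take large-scale flow from the global model
    return np.fft.irfft(blended_hat, n=len(regional))

# Synthetic 1-D example: the global model carries the correct large-scale wave,
# the regional model adds small-scale detail but drifts at large scales.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
truth_large = np.sin(x)
global_field = truth_large                            # good large scales, no detail
regional = 0.7 * np.sin(x) + 0.2 * np.sin(15 * x)     # drifted large scales + detail

blended = scale_selective_blend(regional, global_field, n_keep=4)
print("large-scale error before:", abs(np.fft.rfft(regional)[1] - np.fft.rfft(truth_large)[1]))
print("large-scale error after: ", abs(np.fft.rfft(blended)[1] - np.fft.rfft(truth_large)[1]))
```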


1982 ◽ Vol 15
Author(s): J. H. Westsik, C. O. Harvey, F. P. Roberts, W. A. Ross, R. E. Thornhill

Abstract During the past year we have conducted a modified MCC-1 leach test on a 145 kg block of a cast cement waste form. The leach vessel was a 200 liter Teflon®-lined drum containing 97.5 liters of deionized water. The results of this large-scale leach test were compared with the results of standard MCC-1 tests (40 ml) on smaller samples of the same waste form. The ratio of leachate volumes between the large- and small-scale tests was 2,500, and the ratio of sample masses was 150,000. The cast cement samples for both tests contained plutonium-doped incinerator ash. The leachates from these tests were analyzed for both plutonium and the matrix elements. Evaluation of plutonium plateout in the large-scale test indicated that the majority of the plutonium leached from the samples deposits onto the vessel walls and little (<3 × 10−12 M) remains in solution. Comparison of elemental concentrations in the leachates indicates differences of up to a factor of 5 between the large- and small-scale tests. The differences are attributed to differences in the solubilities of Ca, Si, and Fe at pH ~11.5 and at pH ~12.5. The higher pH observed for the large-scale test is a result of the larger quantity of sodium in the large block of cement.


2012 ◽ Vol 58 (3) ◽ pp. 285-295
Author(s): Diego Ernesto Cortés Udave, Jan Ogrodzki, Miguel Angel Gutiérrez De Anda

Abstract Newton-Raphson DC analysis of large-scale nonlinear circuits may be an extremely time-consuming process, even when sparse-matrix techniques and bypassing of nonlinear model evaluation are used. On multi-core, multithreaded computers, a modest reduction in the time required for this task can be obtained if the evaluation of the mathematical models of the nonlinear elements, as well as the stamp management of the sparse matrix entries, is handled by concurrent processes. In this paper it is shown how the numerical complexity of this problem (and thus its solution time) can be further reduced via circuit decomposition and the parallel solution of blocks, taking the Bordered-Block-Diagonal (BBD) matrix structure as the point of departure. This BBD-parallel approach can yield a considerable benefit, although the gain is strongly dependent on the system topology. The paper presents the theoretical foundation of the algorithm, its implementation, and an analysis of its numerical complexity based on practical measurements of the matrix operations.
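A minimal sketch of why the BBD structure parallelizes, using a generic Schur-complement elimination in NumPy (the block sizes and values are synthetic, and this is not the paper's circuit-simulator implementation): each diagonal block can be factorized independently (the loop iterations could run concurrently), their contributions to the border block are accumulated, and the border variables are solved last before back-substitution.

```python
import numpy as np

# Bordered-block-diagonal system:
#   [ A1  0   B1 ] [x1]   [f1]
#   [ 0   A2  B2 ] [x2] = [f2]
#   [ C1  C2  D  ] [y ]   [g ]
rng = np.random.default_rng(0)
n1, n2, m = 4, 5, 3
A = [rng.random((n1, n1)) + n1 * np.eye(n1),
     rng.random((n2, n2)) + n2 * np.eye(n2)]          # diagonal blocks (subcircuits)
B = [rng.random((n1, m)), rng.random((n2, m))]        # couplings to the border
C = [rng.random((m, n1)), rng.random((m, n2))]
D = rng.random((m, m)) + m * np.eye(m)                # border (interconnection) block
f = [rng.random(n1), rng.random(n2)]
g = rng.random(m)

# Each block solve is independent -> these iterations are the parallelizable part.
S = D.copy()                 # Schur complement of the border block
rhs = g.copy()
local = []
for Ai, Bi, Ci, fi in zip(A, B, C, f):
    Ai_inv_Bi = np.linalg.solve(Ai, Bi)
    Ai_inv_fi = np.linalg.solve(Ai, fi)
    S -= Ci @ Ai_inv_Bi
    rhs -= Ci @ Ai_inv_fi
    local.append((Ai_inv_Bi, Ai_inv_fi))

y = np.linalg.solve(S, rhs)                                         # border variables
x = [Ai_inv_fi - Ai_inv_Bi @ y for Ai_inv_Bi, Ai_inv_fi in local]   # back-substitution

# Check against solving the assembled full system directly.
full = np.block([[A[0], np.zeros((n1, n2)), B[0]],
                 [np.zeros((n2, n1)), A[1], B[1]],
                 [C[0], C[1], D]])
ref = np.linalg.solve(full, np.concatenate([f[0], f[1], g]))
print(np.allclose(np.concatenate(x + [y]), ref))
```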


2014 ◽ Vol 27 (4) ◽ pp. 1821-1825
Author(s): Douglas Maraun

Abstract In his comment, G. Bürger criticizes the conclusion that inflation of trends by quantile mapping is an adverse effect. He assumes that the argument would be "based on the belief that long-term trends and along with them future climate signals are to be large scale." His line of argument falls back on so-called inflated regression. Here it is shown, by referring to previous critiques of inflation and to standard literature in statistical modeling and weather forecasting, that inflation rests on a misunderstanding of explained versus unexplained variability and of prediction versus simulation. It is argued that a sound regression-based downscaling can in principle introduce systematic local variability in long-term trends, but that inflation systematically degrades the representation of trends. Furthermore, it is demonstrated that inflation by construction degrades weather forecasts and cannot correctly simulate small-scale spatiotemporal structure.
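A small synthetic demonstration of the prediction-versus-simulation point, under assumed toy data (not the comment's own analysis): inflating regression predictions so that their variance matches the observations worsens the point-forecast error, whereas simulating the unexplained variability as noise restores the variance without pretending it is predictable.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000
large_scale = rng.normal(size=n)                 # predictor (explained part)
noise = rng.normal(scale=1.0, size=n)            # local, unexplained variability
obs = 0.6 * large_scale + noise                  # local observations

# Ordinary regression prediction: correct conditional mean, reduced variance.
pred = 0.6 * large_scale

# "Inflation": rescale predictions so their variance matches the observations.
inflated = pred * (obs.std() / pred.std())

# Randomisation: add simulated noise instead of inflating the signal.
randomised = pred + rng.normal(scale=noise.std(), size=n)

for name, series in [("regression", pred), ("inflated", inflated), ("randomised", randomised)]:
    rmse = np.sqrt(np.mean((series - obs) ** 2))
    print(f"{name:11s}  variance={series.var():.2f}  RMSE={rmse:.2f}")
```

The inflated series matches the observed variance but has a larger RMSE than the plain regression, while the randomised series matches the variance for simulation purposes without distorting the predictable signal.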


2014 ◽ Vol 4 (1)
Author(s): Stefanie Barz, Ivan Kassal, Martin Ringbauer, Yannick Ole Lipp, Borivoje Dakić, ...

Abstract Large-scale quantum computers will require the ability to apply long sequences of entangling gates to many qubits. In a photonic architecture, where single-qubit gates can be performed easily and precisely, the application of consecutive two-qubit entangling gates has been a significant obstacle. Here, we demonstrate a two-qubit photonic quantum processor that implements two consecutive CNOT gates on the same pair of polarisation-encoded qubits. To demonstrate the flexibility of our system, we implement various instances of the quantum algorithm for solving systems of linear equations.
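The gate algebra behind "two consecutive CNOTs on the same pair of qubits" can be checked with plain matrix arithmetic; the sketch below is only that check, with an assumed input state and an extra Hadamard for illustration, not the experiment's photonic circuit.

```python
import numpy as np

# Computational basis: |00>, |01>, |10>, |11> (first qubit = control).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)

# A Hadamard on the control followed by one CNOT creates a Bell state ...
psi = CNOT @ np.kron(H, I2) @ np.array([1, 0, 0, 0], dtype=complex)
print("after one CNOT :", np.round(psi, 3))       # (|00> + |11>)/sqrt(2)

# ... and a second, identical CNOT on the same pair undoes the entanglement,
# since CNOT is its own inverse.
psi2 = CNOT @ psi
print("after two CNOTs:", np.round(psi2, 3))      # (|00> + |10>)/sqrt(2) = |+>|0>
```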


2006 ◽ Vol 16 (02) ◽ pp. 421-436
Author(s): Hideki Hasegawa, Seiya Kasai, Taketomo Sato

In an attempt to realize tiny "knowledge vehicles" called intelligent quantum (IQ) chips for use in the coming ubiquitous network society, this paper reviews the current status and future prospects of ultra-small-size, ultra-low-power III-V quantum logic large-scale integrated circuits based on a novel hexagonal binary-decision-diagram (BDD) quantum circuit architecture. Here, quantum transport in path-switching node devices formed on III-V semiconductor-based hexagonal nanowire networks is controlled by nanometer-scale Schottky wrap gates (WPGs) to realize arbitrary combinational logic functions. The feasibility of the approach is demonstrated through the fabrication of basic node devices and various small-scale circuits, and approaches for higher-density integration and larger-scale circuits are discussed.
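A minimal sketch of the binary-decision-diagram principle underlying the architecture, purely as logic (the example function, node names, and dictionary encoding are assumptions): a function is evaluated by routing a single "messenger" through a graph of two-way switch nodes, each controlled by one input variable, until a 0 or 1 terminal is reached. In the hardware those switches are WPG-controlled nanowire branches; here they are dictionary lookups.

```python
# Minimal BDD evaluation: each node routes to one of two successors depending
# on the value of its input variable; terminals 0/1 give the function value.
# This example encodes a 2-input AND: f(a, b) = a AND b.
bdd_and = {
    "root": ("a", 0, "nb"),    # (variable, successor if 0, successor if 1)
    "nb":   ("b", 0, 1),
}

def evaluate(bdd, root, assignment):
    node = root
    while node not in (0, 1):                 # follow switch nodes until a terminal
        var, low, high = bdd[node]
        node = high if assignment[var] else low
    return node

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b} -> {evaluate(bdd_and, 'root', {'a': a, 'b': b})}")
```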


Author(s): Ming Hou, Brahim Chaib-draa

In this work, we develop a fast sequential low-rank tensor regression framework, namely recursive higher-order partial least squares (RHOPLS). It addresses the challenges posed by limited storage space and the fast processing times required in dynamic environments when dealing with large-scale, high-speed, general tensor sequences. By integrating a low-rank Tucker modification strategy into a PLS-based framework, we efficiently update the regression coefficients, merging new data into the previous low-rank approximation of the model at the small-scale factor (feature) level instead of the large raw-data (observation) level. Unlike batch models, which require access to the entire data set, RHOPLS follows a blockwise recursive calculation scheme, so only a small set of factors needs to be stored. Our approach is orders of magnitude faster than all other methods while maintaining predictive performance highly comparable to that of cutting-edge batch methods, as verified on challenging real-life tasks.
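The sketch below is not RHOPLS (which operates on Tucker factors of tensor data); it is a drastically simplified stand-in for the blockwise recursive idea: a regression model is updated from streaming data blocks while storing only small summary matrices rather than the raw observations. The class name and dimensions are illustrative assumptions.

```python
import numpy as np

class BlockwiseRegressor:
    """Recursive regression from data blocks: only a d x d Gram matrix and a
    d x k cross-product are stored, never the raw observations (a much
    simplified analogue of blockwise recursive updating, not RHOPLS itself)."""
    def __init__(self, n_features, n_targets):
        self.xtx = np.zeros((n_features, n_features))
        self.xty = np.zeros((n_features, n_targets))

    def update(self, X_block, Y_block):
        # Merge the new block into the running summaries.
        self.xtx += X_block.T @ X_block
        self.xty += X_block.T @ Y_block

    def coefficients(self, ridge=1e-8):
        # Small ridge term keeps the solve well conditioned.
        return np.linalg.solve(self.xtx + ridge * np.eye(len(self.xtx)), self.xty)

rng = np.random.default_rng(0)
true_W = rng.normal(size=(8, 2))
model = BlockwiseRegressor(8, 2)
for _ in range(50):                        # a stream of small data blocks
    X = rng.normal(size=(20, 8))
    Y = X @ true_W + 0.01 * rng.normal(size=(20, 2))
    model.update(X, Y)
print(np.allclose(model.coefficients(), true_W, atol=1e-2))
```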


Geophysics ◽ 1957 ◽ Vol 22 (1) ◽ pp. 9-21
Author(s): A. E. Scheidegger, P. L. Willmore

During large-scale seismic surveys it is often impossible to arrange shot points and seismometers in a simple pattern, so the data cannot be treated as simply as those of small-scale prospecting arrays. It is shown that the problem of reducing seismic observations from m shot points and n seismometers (where there is no simple pattern of arrangement) is equivalent to solving (m+n) normal equations in (m+n) unknowns. These normal equations are linear, and the matrix of their coefficients is symmetric. The problem of inverting that matrix is solved here by the calculus of "Cracovians," mathematical entities similar to matrices. When all the shots have been observed at all the seismometers, the solution can even be given in general form. Otherwise, a certain amount of computation is necessary. An example is given.
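A modern restatement of the reduction, under the assumption that each observation is modeled as a shot term plus a receiver term (the specific model and data here are illustrative, and NumPy's least-squares solver stands in for the Cracovian calculus): m shot points and n seismometers give an (m+n)-unknown normal-equation system, which can be solved even when not every shot is recorded at every seismometer.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 6                                      # shot points, seismometers
true_s = rng.normal(size=m)                      # shot terms
true_r = rng.normal(size=n)                      # receiver terms

# Build the design matrix row by row; some shot/receiver pairs may be unobserved.
rows, data = [], []
for i in range(m):
    for j in range(n):
        if rng.random() < 0.8:                   # not every shot is seen everywhere
            row = np.zeros(m + n)
            row[i] = 1.0                         # shot term s_i
            row[m + j] = 1.0                     # receiver term r_j
            rows.append(row)
            data.append(true_s[i] + true_r[j] + 0.01 * rng.normal())

G = np.vstack(rows)
d = np.array(data)

# The normal equations (G^T G) x = G^T d have (m+n) unknowns; a constant can be
# shifted between shot and receiver terms, so use a least-squares (minimum-norm) solve.
x, *_ = np.linalg.lstsq(G, d, rcond=None)

true_params = np.concatenate([true_s, true_r])
print("observed shot+receiver sums recovered:",
      np.allclose(G @ x, G @ true_params, atol=0.05))
```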


1985 ◽ Vol 29 ◽ pp. 113-118
Author(s): Balder Ortner

It is well known that all six independent components of the strain tensor can be calculated if the linear strains in six appropriate directions are known (e.g.). The calculation amounts to solving a system of linear equations whose coefficients are defined by the orientations of the measured planes. The strains themselves are determined from lattice-plane spacing measurements using X-rays. The linear equation system can only be solved if the matrix of coefficients has full rank. Whether this condition is met can be decided, without calculating a determinant, purely from the geometric relationships among the planes to be measured. A requirement beyond that necessary condition is to choose the coefficient matrix so that the accuracy of the calculated strain tensor is as high as possible. From error propagation we know that there exist distinct ratios between the inevitable measurement errors and the errors of the calculated strain components; these ratios depend strongly on the geometric relationships among the lattice planes. It is the purpose of this paper to show how the lattice planes should be chosen in order to make these ratios as small as possible, i.e., to obtain maximum accuracy for a given number of measurements, or a minimum of experimental effort if a given error limit is to be reached.
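A sketch of the underlying linear system, with illustrative (not optimised) directions and a synthetic strain tensor: the normal strain along a unit direction n is n^T E n, which is linear in the six independent components of E, so six measurement directions give a 6×6 system. The condition number of the coefficient matrix is used here as a rough indicator of how measurement errors are amplified into the computed components.

```python
import numpy as np

def design_row(n):
    """Coefficients of [e11, e22, e33, e23, e13, e12] in n^T E n for unit vector n."""
    x, y, z = n
    return [x * x, y * y, z * z, 2 * y * z, 2 * x * z, 2 * x * y]

def solve_strain(directions, measured_strains):
    M = np.array([design_row(n / np.linalg.norm(n)) for n in directions])
    return np.linalg.solve(M, measured_strains), np.linalg.cond(M)

# Illustrative set of six measurement directions (not an optimised choice).
directions = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                       [1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)

# Synthetic "true" strain components [e11, e22, e33, e23, e13, e12].
true_eps = np.array([1e-3, -2e-4, 5e-4, 1e-4, -3e-4, 2e-4])
measured = np.array([design_row(n / np.linalg.norm(n)) for n in directions]) @ true_eps

eps, cond = solve_strain(directions, measured)
print("recovered components:", eps)
print("condition number (error amplification indicator):", round(cond, 2))
```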


2014 ◽ Vol 7 (4) ◽ pp. 1819-1828
Author(s): H. Wang, X.-Y. Huang, D. Xu, J. Liu

Abstract. Because of the limited domain size and the limited observations used in regional data assimilation and forecasting systems, regional forecasts suffer a general deficiency in effectively representing large-scale features such as those in global analyses and forecasts. In this paper, a scale-dependent blending scheme using a low-pass Raymond tangent implicit filter was implemented in the data assimilation system of the Weather Research and Forecasting model (WRFDA) to reintroduce large-scale weather features from the global model analysis into the WRFDA analysis. The impact of the blending method on regional forecasts was assessed by conducting full-cycle data assimilation and forecasting experiments over a two-week period in September 2012. Obvious large-scale forecast errors are found in the regional WRFDA system running in full-cycle mode without the blending scheme. The scale-dependent blending scheme efficiently reintroduces the large-scale information from National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) analyses while keeping the small-scale information from the WRF analyses. The blending scheme is shown to reduce analysis and forecast errors in wind, temperature, and humidity for lead times up to 24 h compared with the full-cycle experiments without blending. It is also shown to increase precipitation prediction skill in the first 6 h of the forecast.
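A 1-D analogue of the blending step, with synthetic fields and a simple Gaussian spectral smoother standing in for the Raymond tangent implicit filter (an assumption made only to keep the sketch short): the large scales are taken from the global analysis and added to the small-scale residual of the regional analysis.

```python
import numpy as np

def lowpass(field, width):
    """Simple periodic Gaussian smoother, used here as a stand-in
    for the low-pass Raymond tangent implicit filter."""
    n = len(field)
    k = np.fft.rfftfreq(n, d=1.0 / n)            # integer wavenumbers
    response = np.exp(-0.5 * (k / width) ** 2)   # damp high wavenumbers
    return np.fft.irfft(np.fft.rfft(field) * response, n=n)

def blend(regional_analysis, global_analysis, width):
    """Large scales from the global analysis + small scales from the regional one."""
    return lowpass(global_analysis, width) + (regional_analysis - lowpass(regional_analysis, width))

# Synthetic 1-D example.
x = np.linspace(0, 2 * np.pi, 512, endpoint=False)
global_analysis = np.sin(x) + 0.5 * np.cos(2 * x)              # trusted large scales
regional_analysis = 0.8 * np.sin(x) + 0.3 * np.sin(40 * x)     # drifted large scales + detail

blended = blend(regional_analysis, global_analysis, width=4)
print("wavenumber-1 amplitude after blending:",
      abs(np.fft.rfft(blended))[1] * 2 / len(x))   # close to the global value of 1.0
```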

