Quantum Divide and Compute: Exploring the Effect of Different Noise Sources

2021 ◽  
Vol 2 (3) ◽  
Author(s):  
Thomas Ayral ◽  
François-Marie Le Régent ◽  
Zain Saleem ◽  
Yuri Alexeev ◽  
Martin Suchara

Abstract. Our recent work (Ayral et al. in Proceedings of the IEEE Computer Society Annual Symposium on VLSI, ISVLSI, pp 138–140, 2020. 10.1109/ISVLSI49217.2020.00034) showed the first implementation of the Quantum Divide and Compute (QDC) method, which allows quantum circuits to be broken into smaller fragments with fewer qubits and shallower depth. This accommodates the limited number of qubits and short coherence times of quantum processors. This article investigates the impact of different noise sources—readout error, gate error and decoherence—on the success probability of the QDC procedure. We perform detailed noise modeling on the Atos Quantum Learning Machine, allowing us to understand tradeoffs and formulate recommendations about which hardware noise sources should be preferentially optimized. We also describe in detail the noise models we used to reproduce experimental runs on IBM’s Johannesburg processor. This article also includes a detailed derivation of the equations used in the QDC procedure to compute the output distribution of the original quantum circuit from the output distributions of its fragments. Finally, we analyze the computational complexity of the QDC method for the circuit under study via tensor-network considerations, and elaborate on the relation of the QDC method to tensor-network simulation methods.
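As a toy illustration of the recombination step (the exact equations are derived in the article; the two-outcome probability tables below are invented for the example), the full-circuit distribution after a single wire cut is a sum over the cut-qubit basis of products of fragment distributions:

```python
import numpy as np

# Toy sketch (invented numbers, not the article's derivation) of recombining
# two fragment output distributions after a single wire cut.
# p_up[a, s]: fragment-1 joint table over its output bit a and cut state s.
p_up = np.array([[0.4, 0.1],
                 [0.2, 0.3]])
# p_down[b, s]: fragment-2 output distribution given the cut state s.
p_down = np.array([[0.9, 0.5],
                   [0.1, 0.5]])

# p(a, b) = sum_s p_up[a, s] * p_down[b, s]
p_full = np.einsum('as,bs->ab', p_up, p_down)
print(p_full.sum())  # normalized fragment tables recombine into a normalized p(a, b)
```

The `einsum` contraction over the cut index `s` is also what links the method to tensor-network contraction, as the abstract notes.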

2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Amir H. Karamlou ◽  
William A. Simon ◽  
Amara Katabarwa ◽  
Travis L. Scholten ◽  
Borja Peropadre ◽  
...  

Abstract. In the near term, hybrid quantum-classical algorithms hold great potential for outperforming classical approaches. Understanding how these two computing paradigms work in tandem is critical for identifying areas where such hybrid algorithms could provide a quantum advantage. In this work, we study a QAOA-based quantum optimization approach by implementing the Variational Quantum Factoring (VQF) algorithm. We execute experimental demonstrations using a superconducting quantum processor, and investigate the trade-off between quantum resources (number of qubits and circuit depth) and the probability that a given biprime is successfully factored. In our experiments, the integers 1099551473989, 3127, and 6557 are factored with 3, 4, and 5 qubits, respectively, using a QAOA ansatz with up to 8 layers, and we identify the optimal number of circuit layers for a given instance to maximize the success probability. Furthermore, we demonstrate the impact of different noise sources on the performance of QAOA, and reveal that the coherent error caused by the residual ZZ-coupling between qubits is a dominant source of error in a near-term superconducting quantum processor.
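The optimization target behind VQF can be checked classically: factoring a biprime N is recast as minimizing a cost such as (N − p·q)² over candidate factor bits. The brute-force scan below is only a classical sanity check on one of the paper's instances, not the quantum procedure:

```python
# Classical sanity check (a sketch, not the QAOA/VQF procedure): factoring a
# biprime N recast as minimizing the cost (N - p*q)^2 over candidate odd factors.
N = 3127  # one of the biprimes factored in the experiments
cost, p, q = min(((N - a * b) ** 2, a, b)
                 for a in range(3, 60, 2)
                 for b in range(a, N // a + 1, 2))
print(p, q, cost)  # 53 * 59 = 3127, so the minimum cost is 0
```

In VQF this cost is encoded as an Ising Hamiltonian whose ground state the QAOA ansatz approximates layer by layer.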


Author(s):  
J. R. Barnes ◽  
C. A. Haswell

Abstract. Ariel’s ambitious goal to survey a quarter of known exoplanets will transform our knowledge of planetary atmospheres. Masses measured directly with the radial velocity technique are essential for well-determined planetary bulk properties. Radial velocity masses will provide important checks of masses derived from atmospheric fits, or alternatively can be treated as a fixed input parameter to reduce possible degeneracies in atmospheric retrievals. We quantify the impact of stellar activity on planet mass recovery for the Ariel mission sample using Sun-like spot models scaled for active stars combined with other noise sources. Planets with necessarily well-determined ephemerides will be selected for characterisation with Ariel. With this prior requirement, we simulate the derived planet mass precision as a function of the number of observations for a prospective sample of Ariel targets. We find that quadrature sampling can significantly reduce the time commitment required for follow-up RVs, and is most effective when the planetary RV signature is larger than the RV noise. For a typical radial velocity instrument operating on a 4 m class telescope and achieving 1 m s⁻¹ precision, between ~17% and ~37% of the time commitment is spent on the 7% of planets with mass Mp < 10 M⊕. In many low-activity cases, the time required is limited by asteroseismic and photon noise. For low-mass or faint systems, we can recover masses with the same precision up to ~3 times more quickly with an instrumental precision of ~10 cm s⁻¹.
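A back-of-envelope scaling (an assumption for illustration, not the paper's activity simulations) shows why instrumental precision trades against observation count: for a white-noise-limited circular-orbit fit, the semi-amplitude uncertainty scales roughly as σ_K ≈ σ_RV·√(2/N). Real gains are smaller when activity, asteroseismic, or photon noise dominates, consistent with the ~3× speed-up the authors find:

```python
import numpy as np

# Rough white-noise scaling sketch: observations needed to reach a target
# semi-amplitude precision, sigma_K ~ sigma_RV * sqrt(2 / N_obs).
def n_obs_required(sigma_rv, sigma_k_target):
    """Smallest N with sigma_RV * sqrt(2/N) <= sigma_k_target (both in m/s)."""
    return int(np.ceil(2 * (sigma_rv / sigma_k_target) ** 2))

print(n_obs_required(1.0, 0.3))  # 1 m/s instrument (4 m class telescope)
print(n_obs_required(0.1, 0.3))  # 10 cm/s instrument
```

Under this idealized scaling a 10 cm s⁻¹ instrument needs far fewer epochs, but correlated activity and asteroseismic noise set a floor that quadrature sampling helps to work around.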


Atmosphere ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 679
Author(s):  
Sara Cornejo-Bueno ◽  
David Casillas-Pérez ◽  
Laura Cornejo-Bueno ◽  
Mihaela I. Chidean ◽  
Antonio J. Caamaño ◽  
...  

This work presents a full statistical analysis and accurate prediction of low-visibility events due to fog at the A-8 motor-road in Mondoñedo (Galicia, Spain). The analysis covers two years of study, considering visibility time series and exogenous variables collected in the zone most affected by extreme low-visibility events. This paper thus has a two-fold objective: first, we carry out a statistical analysis to estimate the probability distributions that best fit the fog-event duration, using the Maximum Likelihood method and an alternative method known as the L-moments method. This statistical study allows the low-visibility depth to be associated with the event duration, showing a clear relationship, which can be modeled with distributions for extremes such as the Generalized Extreme Value and Generalized Pareto distributions. Second, we apply a neural network approach, trained by means of the ELM (Extreme Learning Machine) algorithm, to predict the occurrence of low-visibility events due to fog from atmospheric predictive variables. This study provides a full characterization of fog events at this motor-road, where orographic fog is predominant, causing significant traffic problems throughout the year. We also show that the ELM approach is able to obtain highly accurate low-visibility event predictions, with a Pearson correlation coefficient of 0.8, within a half-hour time horizon, enough to initialize protocols aimed at reducing the impact of these extreme events on the traffic of the A-8 motor-road.
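The L-moments method mentioned above can be sketched in a few lines: sample L-moments are computed from order statistics and then matched to the distribution's theoretical L-moments. Below is a minimal Generalized Pareto fit in Hosking's convention, applied to synthetic durations (the numbers are illustrative, not the Mondoñedo data):

```python
import numpy as np

# L-moments fit of a Generalized Pareto distribution (Hosking's convention,
# F(x) = 1 - (1 - k*x/a)**(1/k)) to synthetic heavy-tailed "event durations".
rng = np.random.default_rng(0)
durations = rng.pareto(3.0, 2000) * 30.0   # toy durations in minutes

x = np.sort(durations)
n = len(x)
b0 = x.mean()
b1 = np.sum(np.arange(n) / (n - 1) * x) / n
l1, l2 = b0, 2 * b1 - b0                   # first two sample L-moments

k = l1 / l2 - 2                            # GPD shape (negative = heavy tail)
a = (1 + k) * l1                           # GPD scale
print(round(k, 3), round(a, 3))
```

Matching only two L-moments makes the estimator simple and robust to outliers, which is why it is a common alternative to maximum likelihood for extreme-value fits.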


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Yusra Khalid Bhatti ◽  
Afshan Jamil ◽  
Nudrat Nida ◽  
Muhammad Haroon Yousaf ◽  
Serestina Viriri ◽  
...  

Classroom communication involves the teacher’s behavior and students’ responses. Extensive research has been done on the analysis of students’ facial expressions, but the impact of the instructor’s facial expressions remains an unexplored area of research. Facial expression recognition has the potential to predict the impact of a teacher’s emotions in a classroom environment. Intelligent assessment of instructor behavior during lecture delivery not only might improve the learning environment but also could save the time and resources spent on manual assessment strategies. To address the issue of manual assessment, we propose an approach for recognizing an instructor’s facial expressions within a classroom using a feedforward learning model. First, the face is detected in the acquired lecture videos and key frames are selected, discarding all redundant frames for effective high-level feature extraction. Then, deep features are extracted using multiple convolutional neural networks, along with parameter tuning, and are fed to a classifier. For fast learning and good generalization, a regularized extreme learning machine (RELM) classifier is employed, which classifies five different expressions of the instructor within the classroom. Experiments are conducted on a newly created dataset of instructors’ facial expressions in classroom environments plus three benchmark facial datasets: Cohn–Kanade, the Japanese Female Facial Expression (JAFFE) dataset, and the Facial Expression Recognition 2013 (FER2013) dataset. Furthermore, the proposed method is compared with state-of-the-art techniques, traditional classifiers, and convolutional neural models. Experimental results indicate significant performance gains on metrics such as accuracy, F1-score, and recall.
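The RELM readout described above trains in closed form: a random hidden layer maps the features, and a ridge-regularized least-squares solve fits the output weights. A minimal sketch on toy two-class data (standing in for the deep CNN features; all sizes and the ridge strength are illustrative assumptions):

```python
import numpy as np

# Minimal regularized extreme learning machine (RELM): random hidden layer,
# then a ridge-regularized least-squares readout. Toy Gaussian clusters stand
# in for the deep features of the five expression classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.5, (50, 10)),   # class 0 features
               rng.normal(+1, 0.5, (50, 10))])  # class 1 features
T = np.repeat(np.eye(2), 50, axis=0)            # one-hot targets

L, C = 64, 10.0                                 # hidden units, ridge strength
W = rng.normal(size=(10, L)); b = rng.normal(size=L)
H = np.tanh(X @ W + b)                          # random (untrained) feature map
beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ T)  # closed-form readout

pred = (H @ beta).argmax(axis=1)
acc = (pred == T.argmax(axis=1)).mean()
print(acc)
```

Because only `beta` is solved for, training is a single linear solve, which is the source of the "fast learning" the abstract claims; the 1/C term is the regularization that improves generalization over a plain ELM.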


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 410
Author(s):  
Johnnie Gray ◽  
Stefanos Kourtis

Tensor networks represent the state-of-the-art in computational methods across many disciplines, including the classical simulation of quantum many-body systems and quantum circuits. Several applications of current interest give rise to tensor networks with irregular geometries. Finding the best possible contraction path for such networks is a central problem, with an exponential effect on computation time and memory footprint. In this work, we implement new randomized protocols that find very high quality contraction paths for arbitrary and large tensor networks. We test our methods on a variety of benchmarks, including the random quantum circuit instances recently implemented on Google quantum chips. We find that the paths obtained can be very close to optimal, and often many orders of magnitude better than the most established approaches. As different underlying geometries suit different methods, we also introduce a hyper-optimization approach, where both the method applied and its algorithmic parameters are tuned during the path finding. The increase in quality of contraction schemes found has significant practical implications for the simulation of quantum many-body systems and particularly for the benchmarking of new quantum chips. Concretely, we estimate a speed-up of over 10,000× compared to the original expectation for the classical simulation of the Sycamore ‘supremacy’ circuits.
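The effect of contraction order can be seen even with NumPy's built-in path optimizer, a much simpler stand-in for the randomized and hyper-optimized protocols of the paper: for the same network, the optimizer reports how far a chosen pairwise path beats the naive all-at-once contraction cost:

```python
import numpy as np

# Toy contraction-path illustration (numpy's optimizer, not the paper's
# randomized protocols): a small matrix chain contracted pairwise.
A = np.random.rand(8, 8)
B = np.random.rand(8, 8)
C = np.random.rand(8, 8)

# einsum_path returns the pairwise contraction order plus a cost report
# comparing naive and optimized FLOP counts for this network.
path, info = np.einsum_path('ij,jk,kl->il', A, B, C, optimize='greedy')
print(path)
print(info)
```

For irregular, circuit-derived networks the gap between a poor and a near-optimal path grows exponentially in the treewidth, which is why the paper's hyper-optimized path finding translates directly into the reported simulation speed-ups.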


2010 ◽  
Vol 8 ◽  
pp. 237-241 ◽  
Author(s):  
G. Stober ◽  
C. Jacobi ◽  
D. Keuer

Abstract. The determination of the meteoroid flux is still a scientifically challenging task. This paper focuses on the impact of extraterrestrial noise sources as well as atmospheric phenomena on the observation of specular meteor echoes. The effect of cosmic radio noise on the meteor detection process is estimated by computing the relative difference between radio-loud and radio-quiet areas and comparing the monthly averaged meteor flux at fixed signal-to-noise ratios or fixed electron line density measurements. Related to the cosmic radio noise is the influence of D-layer absorption or interference with sporadic E-layers, which can lead to apparent day-to-day variations of the meteor flux of 15–20%.
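The comparison described above reduces to a simple relative-difference statistic; the counts below are invented for illustration and are not the paper's measurements:

```python
# Sketch of the radio-loud vs radio-quiet comparison (illustrative counts):
# detected meteors at a fixed signal-to-noise threshold in each sky region.
counts_quiet, counts_loud = 1200, 1000
rel_diff = (counts_quiet - counts_loud) / counts_quiet
print(f"{rel_diff:.0%}")  # relative loss of detections in the radio-loud region
```

A raised noise floor (whether from cosmic radio sources or D-layer absorption) suppresses detections near the threshold in the same way, which is why the two effects produce similar apparent flux variations.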


Author(s):  
Mónica Galdo Vega ◽  
Jesus Manuel Fernandez Oro ◽  
Katia María Argüelles Díaz ◽  
Carlos Santolaria Morros

This second part is devoted to the identification of vortex sound sources in low-speed turbomachinery. As a starting point, the time-resolved evolution of the vortical motions associated with the wake shear layers (reported in the first part of the present study) is employed to obtain vorticity distributions at both blade-to-blade and traverse locations throughout the axial fan stage. Next, the Powell analogy for the generation of vortex sound is revisited to obtain the noise sources in the near-field region of the fan. Both the numerical and experimental databases presented previously are now post-processed to achieve a deep understanding of the aeroacoustic behavior of the vortical scales present in the flow. An LES simulation at midspan, using a 2.5D scheme, allows an accurate description of the turn-out time of the shedding vortices, with high-density meshes in the blade and vane passages and a correct modeling of the dynamics of turbulence. In addition, thermal anemometry with a two-wire probe has been employed to measure the planar flow in the midspan sections of the fan. Statistical procedures and signal conditioning of the velocity traces have experimentally confirmed the unsteady flow patterns observed in the numerical model. The comparison of the rotor-stator and stator-rotor configurations reveals the influence of wake mixing and the nucleation of turbulent spots on the distribution of the Powell source terms. Moreover, the relation between the turbomachine configuration and the generation of vortex sound can be established, including the impact of the operating conditions and the contributions of the interaction mechanisms.
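On a 2D midspan plane, the leading Powell source term is the divergence of the Lamb vector, ∇·(ω × u). A finite-difference sketch on a synthetic Gaussian vortex (the velocity field is invented for illustration, not the fan's LES data):

```python
import numpy as np

# Sketch of evaluating the Powell source term div(omega x u) on a 2D plane.
# The Gaussian vortex below is a synthetic stand-in for the midspan LES field.
x = np.linspace(-2, 2, 128)
X, Y = np.meshgrid(x, x, indexing='ij')
r2 = X**2 + Y**2
u = -Y * np.exp(-r2)                              # toy velocity components
v = +X * np.exp(-r2)

dx = x[1] - x[0]
omega = np.gradient(v, dx, axis=0) - np.gradient(u, dx, axis=1)  # z-vorticity
Lx, Ly = -omega * v, omega * u                    # Lamb vector, omega x u in 2D
source = np.gradient(Lx, dx, axis=0) + np.gradient(Ly, dx, axis=1)
print(float(np.abs(source).max()))                # peak source-term magnitude
```

Post-processing the LES and hot-wire fields through this term is what localizes the vortex sound sources in the wake-interaction regions discussed above.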


2021 ◽  
Author(s):  
Alexandru Tiganescu ◽  
Bogdan Grecu ◽  
Iolanda-Gabriela Craifaleanu ◽  
Dragos Toma-Danila ◽  
Stefan-Florin Balan

The impact of natural hazards on structures and infrastructures is a critical issue that needs to be properly addressed by both public and private entities. To better cope with seismic hazard and to mitigate the risk, long-term multi-sensor infrastructure monitoring is a useful tool for acquiring information on structural condition and vulnerability. However, the growing data volume collected by sensors is no longer suitable for processing with classical standalone methods. Thus, automatic algorithms and decision-making frameworks should be developed to use these data with minimum intervention from human operators. A case study for the application of advanced methods focuses on the headquarters of the Institute for Atomic Physics, an 11-story reinforced concrete building located near Bucharest, Romania. The instrumentation scheme consists of accelerometers installed at the basement, at an intermediate floor and at the top of the structure. Data have been recorded continuously since December 2013. More than 80 seismic events with moment magnitude Mw larger than 3.8 were recorded during the monitoring period. The current study covers the long-term evolution and variation of dynamic parameters (one value per hour), based on both ambient noise sources and small- and medium-magnitude seismic events. The seasonal variation of these parameters will be determined, as well as their daily variation and the differences between values obtained from ambient noise and from earthquake-induced vibrations. Other atmospheric parameters (e.g. temperature, precipitation, wind speed) will be considered in future studies. The goal of the PREVENT project, in the framework of which the research is performed, is to collect multi-disciplinary data and to integrate them into a complex monitoring system. The current study achieves the first step, focusing on data from the seismic sensors and setting up the premises for a multi-sensor, multi-parameter, more reliable infrastructure monitoring system.
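The hourly dynamic-parameter extraction described above typically amounts to picking the building's fundamental frequency from the spectrum of an ambient vibration record. A sketch on synthetic data (the 2 Hz mode, noise level, and sample rate are invented assumptions, not the monitored building's values):

```python
import numpy as np

# Sketch of hourly modal tracking from ambient noise: take one hour of roof
# acceleration and pick the spectral peak as the fundamental frequency.
fs, f0 = 100.0, 2.0                      # sample rate (Hz), toy modal frequency
t = np.arange(0, 3600, 1 / fs)           # one hour of samples
rng = np.random.default_rng(2)
acc = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(t.size)

spec = np.abs(np.fft.rfft(acc))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 0.5) & (freqs < 20.0)    # plausible band for building modes
f_peak = freqs[band][spec[band].argmax()]
print(f_peak)
```

Tracking `f_peak` hour by hour over seasons is what exposes the temperature- and damage-related stiffness variations the study sets out to quantify.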


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Yan Ding ◽  
Pan Dong ◽  
Zhipeng Li ◽  
Yusong Tan ◽  
Chenlin Huang ◽  
...  

The root privilege escalation attack is extremely destructive to the security of the Android system. SEAndroid implements mandatory access control over the system through the SELinux security policy at the kernel level, making general root privilege escalation attacks infeasible. However, malicious attackers can exploit Linux kernel privilege-escalation vulnerabilities to arbitrarily modify the SELinux security labels of a process, obtain the desired permissions, and undermine system security. Therefore, investigating how to protect security labels in the SELinux kernel is urgent, and the impact on the existing security configuration of the system must also be minimized. This paper proposes an optimization scheme for the SELinux mechanism based on security label randomization to solve the aforementioned problem. At runtime, the system randomizes the mapping of security labels inside and outside the kernel to protect the privileged security labels of the system from being illegally obtained or tampered with by attackers. This method is transparent to users; therefore, users do not need to modify the existing system security configuration. A tamper-proof detection method for SELinux security labels is also proposed to further improve the security of the method. It detects and corrects malicious tampering with the security labels of critical system processes in a timely manner. The above methods are implemented in the Linux system, and the effectiveness of the security defense is proven through theoretical analysis and experimental verification. Numerous experiments show that the effect of this method on system performance is less than 1%, and the success probability of a root privilege escalation attack is less than 10⁻⁹.
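Conceptually, the randomization step builds a per-boot random bijection between security labels and their kernel-internal identifiers, so an attacker who overwrites a label field with a guessed constant no longer lands on a privileged context. A user-space sketch of the idea (not SEAndroid kernel code; the label strings are illustrative):

```python
import secrets

# Conceptual sketch of security-label randomization: remap kernel-internal
# label identifiers through a random bijection, unpredictable per boot.
labels = ["u:r:init:s0", "u:r:system_server:s0", "u:r:untrusted_app:s0"]
ids = list(range(len(labels)))

# Fisher-Yates shuffle driven by a CSPRNG, so the mapping cannot be guessed
for i in range(len(ids) - 1, 0, -1):
    j = secrets.randbelow(i + 1)
    ids[i], ids[j] = ids[j], ids[i]

mapping = dict(zip(labels, ids))          # label -> randomized internal id
inverse = {v: k for k, v in mapping.items()}
print(all(inverse[mapping[l]] == l for l in labels))  # bijection round-trips
```

The inverse table is what keeps the scheme transparent to user space: labels crossing the kernel boundary are translated back, so existing policy files need no changes.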


Author(s):  
Boris Goncharov ◽  
D J Reardon ◽  
R M Shannon ◽  
Xing-Jiang Zhu ◽  
Eric Thrane ◽  
...  

Abstract. Pulsar timing array projects measure the pulse arrival times of millisecond pulsars for the primary purpose of detecting nanohertz-frequency gravitational waves. The measurements include contributions from a number of astrophysical and instrumental processes, which can be either deterministic or stochastic. It is necessary to develop robust statistical and physical models for these noise processes because incorrect models diminish sensitivity and may cause a spurious gravitational wave detection. Here we characterise noise processes for the 26 pulsars in the second data release of the Parkes Pulsar Timing Array using Bayesian inference. In addition to well-studied noise sources found previously in pulsar timing array data sets, such as achromatic timing noise and dispersion measure variations, we identify new noise sources, including time-correlated chromatic noise that we attribute to variations in pulse scattering. We also identify “exponential dip” events in four pulsars, which we attribute to magnetospheric effects, as evidenced by pulse profile shape changes observed for three of the pulsars. This includes an event in PSR J1713+0747, which had previously been attributed to interstellar propagation. We present noise models to be used in searches for gravitational waves. We outline a robust methodology to evaluate the performance of noise models and identify unknown signals in the data. The detection of variations in pulse profiles highlights the need to develop efficient profile-domain timing methods.
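The achromatic timing noise mentioned above is commonly modeled with a power-law power spectral density, P(f) ∝ A²(f/f_yr)^(−γ), up to a units convention. A sketch with illustrative amplitude and spectral-index values (not the paper's posteriors):

```python
import numpy as np

# Sketch of the power-law red-noise model commonly fit in pulsar timing:
# P(f) = (A^2 / 12 pi^2) * (f / f_yr)^(-gamma), up to a units convention.
# log10_A and gamma below are illustrative, not the paper's fitted values.
f_yr = 1.0 / (365.25 * 86400.0)          # one cycle per year, in Hz

def red_noise_psd(f, log10_A=-14.0, gamma=13.0 / 3.0):
    A = 10.0 ** log10_A
    return A**2 / (12 * np.pi**2) * (f / f_yr) ** (-gamma)

f = np.logspace(-9, -7, 5)               # nanohertz band probed by PTAs
print(red_noise_psd(f))                  # steeply red: power falls with f
```

Chromatic terms like the scattering noise identified in the paper add a radio-frequency-dependent scaling on top of such a spectrum, which is how the Bayesian analysis separates them from achromatic spin noise.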

