sampling cost
Recently Published Documents

TOTAL DOCUMENTS: 48 (five years: 20)
H-INDEX: 8 (five years: 3)

Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 600
Author(s):  
Jiaqing Jiang ◽  
Kun Wang ◽  
Xin Wang

Completely positive and trace-preserving (CPTP) maps characterize physically implementable quantum operations. On the other hand, general linear maps, such as positive but not completely positive maps, cannot be physically implemented, yet they are fundamental ingredients of quantum information, from both theoretical and practical perspectives. This raises the question of how well one can simulate or approximate the action of a general linear map by physically implementable operations. In this work, we introduce a systematic framework for resolving this task using the quasiprobability decomposition technique. We decompose a target linear map into a linear combination of physically implementable operations and introduce the physical implementability measure as the least amount of negative weight that such a quasiprobability decomposition must contain; this directly quantifies the cost of simulating a given map using physically implementable quantum operations. We show that this measure is efficiently computable by semidefinite programs and prove several of its properties, such as faithfulness, additivity, and unitary invariance. We derive lower and upper bounds in terms of the trace norm of the Choi operator and obtain analytic expressions for several linear maps of practical interest. Furthermore, we endow this measure with an operational meaning in the quantum error mitigation scenario: it establishes a lower bound on the sampling cost achievable via the quasiprobability decomposition technique. In particular, for parallel quantum noises, we show that global error mitigation has no advantage over local error mitigation.
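
As a rough illustration of the semidefinite program described above (our sketch, not the authors' code), the following assumes cvxpy (>= 1.2, with an SDP solver) and the Choi-matrix convention in which trace preservation reads Tr_out J = I. It minimizes the negative weight c in a decomposition J = (1 + c) J_plus - c J_minus, where J_plus and J_minus are rescaled Choi matrices of CPTP maps; the physical implementability measure is derived from this minimal negative portion.

```python
# Hedged sketch: minimal negative weight in a quasiprobability decomposition
# of a linear map into CPTP operations, posed as an SDP with cvxpy.
import numpy as np
import cvxpy as cp

def negative_weight(J, d):
    """J: Choi matrix (d*d x d*d) of a trace-preserving target linear map."""
    c = cp.Variable(nonneg=True)
    Jp = cp.Variable((d * d, d * d), hermitian=True)  # (1 + c) * Choi of a CPTP map
    Jm = cp.Variable((d * d, d * d), hermitian=True)  # c * Choi of a CPTP map
    constraints = [
        Jp >> 0, Jm >> 0,                                             # complete positivity
        cp.partial_trace(Jp, [d, d], axis=1) == (1 + c) * np.eye(d),  # trace preservation
        cp.partial_trace(Jm, [d, d], axis=1) == c * np.eye(d),
        Jp - Jm == J,                                                 # the decomposition itself
    ]
    cp.Problem(cp.Minimize(c), constraints).solve()
    return c.value

# Example: the qubit transpose map (positive but not completely positive);
# in this convention its Choi matrix is the SWAP operator.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)
print(negative_weight(SWAP, 2))
```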


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 548
Author(s):  
Zhenyu Cai

Even with the recent rapid developments in quantum hardware, noise remains the biggest challenge for the practical application of any near-term quantum device. Full quantum error correction cannot be implemented in these devices due to their limited scale. Therefore, instead of relying on engineered code symmetries, symmetry verification was developed, which exploits symmetries inherent in the physical problem we are trying to solve. In this article, we develop a general framework named symmetry expansion that provides a wide spectrum of symmetry-based error mitigation schemes beyond symmetry verification, enabling us to strike different balances between the estimation bias and the sampling cost of the scheme. We show that certain symmetry expansion schemes can achieve a smaller estimation bias than symmetry verification through cancellation between the biases due to the detectable and undetectable noise components, and we introduce a practical way to search for such small-bias schemes. In numerical simulations of energy estimation for the Fermi-Hubbard model, the small-bias symmetry expansion we found achieves an estimation bias 6 to 9 times below what is achievable by symmetry verification when the average number of circuit errors is between 1 and 2, at a sampling cost for shot-noise reduction only 2 to 6 times higher than symmetry verification. Beyond symmetries inherent to the physical problem, our formalism also applies to engineered symmetries. For example, the recent scheme for exponential error suppression using multiple noisy copies of the quantum device is a special case of symmetry expansion using the permutation symmetry among the copies.
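
At the density-matrix level, the estimator underlying such schemes can be sketched as follows (a toy illustration under our assumptions, not the paper's code): for symmetry operators S_i with weights w_i, estimate Tr[rho P_w O] / Tr[rho P_w] with P_w = sum_i w_i S_i, assuming the S_i commute with the observable O; uniform weights over the symmetry group recover symmetry verification.

```python
# Toy sketch of a symmetry-expanded estimate; state, observable, and weights
# below are illustrative placeholders.
import numpy as np

def symmetry_expanded_estimate(rho, O, symmetry_ops, weights):
    P_w = sum(w * S for w, S in zip(weights, symmetry_ops))
    return np.real(np.trace(rho @ P_w @ O) / np.trace(rho @ P_w))

# Example: 2-qubit parity symmetry S = Z (x) Z.
Z = np.diag([1.0, -1.0])
S = np.kron(Z, Z)
ops = [np.eye(4), S]
rho = np.diag([0.7, 0.1, 0.1, 0.1])   # noisy state with some odd-parity leakage
O = np.kron(Z, np.eye(2))             # observable commuting with S
# weights [0.5, 0.5] reproduce symmetry verification (projection onto the
# even-parity subspace); other choices trade bias against sampling cost.
print(symmetry_expanded_estimate(rho, O, ops, [0.5, 0.5]))
```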


Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6193
Author(s):  
Jie Yang ◽  
Xinchang Zhang ◽  
Yun Huang

Classification is a fundamental task in airborne laser scanning (ALS) point cloud processing and applications. The task is challenging because outdoor scenes are highly complex and the point clouds are irregularly distributed. Many existing deep learning methods suffer from drawbacks such as complex pre-/post-processing steps, an expensive sampling cost, and a limited receptive field size. In this paper, we propose a graph attention feature fusion network (GAFFNet) that achieves satisfactory classification performance by capturing wider contextual information from the ALS point cloud. Based on the graph attention mechanism, we first design a neighborhood feature fusion unit and an extended neighborhood feature fusion block, which effectively enlarge the receptive field of each point. On this basis, we further design an encoder-decoder network to obtain semantic features of the point cloud at different levels, allowing more accurate classification. We evaluate our method on the publicly available ALS point cloud dataset provided by the International Society for Photogrammetry and Remote Sensing (ISPRS). The experimental results show that our method effectively distinguishes nine types of ground objects and outperforms competing approaches across the evaluation metrics.
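
The core neighborhood-fusion idea can be sketched schematically (our PyTorch illustration under assumed shapes, not the GAFFNet code): aggregate each point's k-nearest-neighbor features with learned attention weights, which widens the effective receptive field per point.

```python
# Schematic sketch of attention-weighted neighborhood feature fusion for points.
import torch
import torch.nn as nn

class NeighborhoodAttentionFusion(nn.Module):
    def __init__(self, in_dim, out_dim, k=16):
        super().__init__()
        self.k = k
        self.score = nn.Linear(2 * in_dim, 1)  # attention from (center, neighbor) pairs
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, xyz, feats):
        # xyz: (N, 3) point coordinates; feats: (N, C) per-point features
        dists = torch.cdist(xyz, xyz)                    # (N, N) pairwise distances
        idx = dists.topk(self.k, largest=False).indices  # (N, k) kNN indices
        neigh = feats[idx]                               # (N, k, C) neighbor features
        center = feats.unsqueeze(1).expand_as(neigh)     # (N, k, C) repeated center
        attn = torch.softmax(self.score(torch.cat([center, neigh], -1)), dim=1)
        fused = (attn * neigh).sum(dim=1)                # attention-weighted aggregation
        return self.proj(fused)                          # (N, out_dim)

xyz, feats = torch.rand(1024, 3), torch.rand(1024, 64)
out = NeighborhoodAttentionFusion(64, 128)(xyz, feats)   # (1024, 128)
```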


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0256699
Author(s):  
Azhar Mehmood Abbasi ◽  
Muhammad Yousaf Shad

This paper considers concomitant-based ranked set sampling (CRSS) for estimating a sensitive proportion. It is shown that the CRSS procedure provides an unbiased estimator of the population sensitive proportion that is always more precise than the corresponding sample sensitive proportion based on simple random sampling (SRS) (Warner, 1965), without increasing the sampling cost. Additionally, a new ratio-type estimator is introduced under the CRSS protocol, preserving respondent confidentiality through a randomizing device. Numerical results for these estimators are obtained via numerical integration, and an application to real data is given to support the methods.
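
For context, the SRS baseline being improved upon is Warner's randomized-response estimator; a small simulation sketch (our illustration, with made-up parameters) follows.

```python
# Warner (1965): each respondent answers the direct question with probability p
# and the reversed question otherwise, so P(yes) = p*pi + (1-p)*(1-pi).
import numpy as np

rng = np.random.default_rng(0)

def warner_estimate(answers, p):
    lam_hat = answers.mean()                  # observed proportion of "yes"
    return (lam_hat - (1 - p)) / (2 * p - 1)  # unbiased estimate of pi

pi_true, p, n = 0.3, 0.7, 2000     # sensitive proportion, device probability, sample size
has_trait = rng.random(n) < pi_true
asked_direct = rng.random(n) < p   # the randomizing device picks the question
answers = np.where(asked_direct, has_trait, ~has_trait)
print(warner_estimate(answers.astype(float), p))  # close to 0.3 on average
```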


Author(s):  
Ke Wang ◽  
Qingwen Xue ◽  
Jian John Lu

Identifying high-risk drivers before an accident happens is necessary for traffic accident control and prevention. Because driving data are inherently class-imbalanced, high-risk samples, as the minority class, are usually handled poorly by standard classification algorithms. Instead of applying a preset sampling scheme or cost-sensitive learning, this paper proposes a novel automated machine learning framework that simultaneously and automatically searches for the optimal sampling method, cost-sensitive loss function, and probability calibration to handle the class-imbalance problem in recognizing risky drivers. The hyperparameters that control the sampling ratio and class weight, along with the other hyperparameters, are optimized by Bayesian optimization. To demonstrate the performance of the proposed framework, we establish a risky driver recognition model as a case study, using video-extracted vehicle trajectory data of 2427 private cars on a German highway. Based on rear-end collision risk evaluation, only 4.29% of all drivers are labeled as risky. The inputs of the recognition model are the discrete Fourier transform coefficients of the target vehicle's longitudinal speed, lateral speed, and gap to its preceding vehicle. Across 12 sampling methods, 2 cost-sensitive loss functions, and 2 probability calibration methods, the result of the automated search is consistent with manual searching but much more computationally efficient. We find that the combination of Support Vector Machine-based Synthetic Minority Oversampling TEchnique (SVMSMOTE) sampling, a cost-sensitive cross-entropy loss function, and isotonic regression significantly improves recognition ability and reduces the error of the predicted probabilities.
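
A condensed sketch of such a joint search (our illustration, assuming scikit-learn, imbalanced-learn, and optuna as the Bayesian-optimization backend; the objective and base classifier are placeholders, not the authors' pipeline):

```python
# Jointly tune oversampling ratio, cost-sensitive class weight, and
# probability calibration with Bayesian optimization (optuna's TPE sampler).
import optuna
from imblearn.over_sampling import SVMSMOTE
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

def objective(trial, X, y):
    ratio = trial.suggest_float("sampling_ratio", 0.1, 1.0)  # minority/majority after SMOTE
    w = trial.suggest_float("minority_weight", 1.0, 50.0)    # cost-sensitive class weight
    calib = trial.suggest_categorical("calibration", ["isotonic", "sigmoid"])
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)
    X_res, y_res = SVMSMOTE(sampling_strategy=ratio, random_state=0).fit_resample(X_tr, y_tr)
    base = LogisticRegression(class_weight={0: 1.0, 1: w}, max_iter=1000)
    model = CalibratedClassifierCV(base, method=calib, cv=3).fit(X_res, y_res)
    return brier_score_loss(y_val, model.predict_proba(X_val)[:, 1])

# Usage (given feature matrix X and binary labels y):
# study = optuna.create_study(direction="minimize")
# study.optimize(lambda t: objective(t, X, y), n_trials=100)
```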


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Zhenyu Cai

Noise in quantum hardware remains the biggest roadblock to the implementation of quantum computers. To fight noise in practical applications of near-term quantum computers, instead of relying on quantum error correction, which requires a large qubit overhead, we turn to quantum error mitigation, in which we make use of extra measurements. Error extrapolation is an error mitigation technique that has been implemented successfully in experiments. Numerical simulations and heuristic arguments have indicated that exponential curves are effective for extrapolation in the large-circuit limit, with an expected circuit error count around unity. In this Article, we extend this to multi-exponential error extrapolation and provide a more rigorous proof of its effectiveness under Pauli noise. This is further validated by our numerical simulations, which show orders-of-magnitude improvements in estimation accuracy over single-exponential extrapolation. Moreover, we develop methods to combine error extrapolation with two other error mitigation techniques, quasi-probability and symmetry verification, by exploiting features of the individual techniques. As shown in our simulations, the combined method can achieve low estimation bias at a sampling cost several times smaller than quasi-probability alone, without needing to adjust the hardware error rate as required in canonical error extrapolation.
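
In its canonical form, multi-exponential extrapolation fits expectation values measured at boosted noise levels and reads off the zero-noise intercept; a minimal sketch (our illustration with made-up data points, assuming scipy):

```python
# Fit a bi-exponential decay to expectation values measured at scaled noise
# levels lambda_k and extrapolate to lambda = 0 (the noiseless estimate).
import numpy as np
from scipy.optimize import curve_fit

def bi_exponential(lam, a1, g1, a2, g2):
    return a1 * np.exp(-g1 * lam) + a2 * np.exp(-g2 * lam)

lam = np.array([1.0, 1.5, 2.0, 2.5, 3.0])    # noise-scaling factors
E = np.array([0.71, 0.62, 0.55, 0.49, 0.44])  # measured expectation values (made up)
params, _ = curve_fit(bi_exponential, lam, E, p0=[0.5, 0.3, 0.5, 1.0], maxfev=10000)
print(bi_exponential(0.0, *params))           # extrapolated zero-noise estimate
```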


2021 ◽  
Vol 6 (1) ◽  
pp. 1
Author(s):  
Yonghao Zhao

Nowadays, the demand for indoor location information keeps growing, which continuously drives the development of indoor positioning technology. Fingerprint-based algorithms still account for a large proportion of indoor positioning methods. However, their offline stage is cumbersome and time-consuming, requiring substantial manpower and time to sample and maintain the fingerprint map, which is an obvious disadvantage. In view of this, an improved algorithm based on nearest neighbor interpolation is designed in this paper, which reduces the number of points that must actually be measured when establishing the fingerprint map. At the same time, simulated points are added to expand the fingerprint map, ensuring that the positioning error does not grow and may even improve. Experimental results show that this method improves positioning accuracy while saving sampling cost.
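
A toy sketch of the map-expansion step (our illustration, assuming scipy; coordinates and signal values are synthetic):

```python
# Survey only a coarse set of reference points, then fill a denser grid of
# simulated fingerprint points by nearest-neighbour interpolation.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
measured_xy = rng.uniform(0, 20, size=(40, 2))   # coarse set of surveyed points (m)
measured_rss = rng.uniform(-90, -40, size=40)     # RSS from one access point (dBm)

xs, ys = np.meshgrid(np.arange(0, 20, 0.5), np.arange(0, 20, 0.5))
dense_xy = np.column_stack([xs.ravel(), ys.ravel()])  # dense grid of simulated points
dense_rss = griddata(measured_xy, measured_rss, dense_xy, method="nearest")
# dense_rss is an expanded fingerprint map built from far fewer site surveys.
```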


2021 ◽  
Vol 4 ◽  
Author(s):  
Charo López-Blanco ◽  
Antonio García-Alix ◽  
Yi Wang ◽  
Laura S. Epp

Due to similarities in morphological features together with strong dispersal abilities, some groups of zooplankton (e.g. rotifers, copepods, and cladocerans) were long thought to have cosmopolitan distributions. In the particular case of cladocerans, recent molecular studies using DNA barcode regions have painted a different picture, revealing multiple regional endemic species and geographical phylogroups, even at very small geographical scales. This has demonstrated that cladoceran species are less widely distributed than assumed. Morphological identification of these animals requires expertise and high taxonomic specialization. Even so, species identification is hampered by the small size of the organisms (especially those from the littoral zone) and by the sampling cost of obtaining rare species and both parthenogenetic and gamogenetic specimens. Molecular techniques can provide new tools to identify cryptic diversity and, combined with taxonomic approaches, more precise biodiversity data. However, the accuracy of species assignment in metabarcoding relies on the availability of a DNA reference library, which is challenging in areas with high endemicity rates such as the Iberian Peninsula. A preliminary compilation of the available molecular data for the cytochrome c oxidase subunit I (COI) gene in public repositories (Barcode of Life Data Systems and NCBI GenBank) shows that the available sequences cover only ~60% of Iberian freshwater cladocerans. The family Daphniidae is very well represented, while the family Chydoridae, which contains most of the Iberian endemics, is underrepresented. We have identified the gaps and are now collecting the target organisms to fill in the missing taxa of the Iberian library. A compendium of the sampling points, the species recovered so far, pitfalls, and future strategy is presented here. The completion of a DNA database for cladocerans will have applications not only in biomonitoring programs but also in the development of DNA-based methods in paleolimnology.
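
The gap analysis itself is a simple set comparison; a minimal Python sketch (our illustration; the species lists are placeholders, not the project's data):

```python
# Compare a regional checklist against the species that already have COI
# barcodes in the public repositories to estimate reference-library coverage.
iberian_checklist = {"Daphnia pulex", "Daphnia magna", "Chydorus sphaericus",
                     "Alona iberica", "Pleuroxus aduncus"}   # illustrative names
species_with_coi = {"Daphnia pulex", "Daphnia magna", "Chydorus sphaericus"}

covered = iberian_checklist & species_with_coi
missing = iberian_checklist - species_with_coi               # sampling targets
print(f"coverage: {len(covered) / len(iberian_checklist):.0%}")
print("missing from the reference library:", sorted(missing))
```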

