optimal measurements
Recently Published Documents

TOTAL DOCUMENTS: 61 (FIVE YEARS: 17)
H-INDEX: 13 (FIVE YEARS: 3)

Author(s): Parames Chutima, Nicha Krisanaphan

Crew pairing is the primary cost checkpoint in airline crew scheduling. Because crew cost is second only to fuel cost, a substantial saving can be gained from effective crew pairing. In this paper, the cockpit crew pairing problem (CCPP) of a budget airline was studied. Unlike the conventional CCPP, which focuses solely on the cost component, several objectives deemed no less important than cost minimisation were also taken into consideration. The adaptive non-dominated sorting differential evolution algorithm III (ANSDE III) was proposed to optimise the CCPP against many objectives simultaneously. The performance of ANSDE III was compared against the NSGA III, MOEA/D, and MODE algorithms under several Pareto optimality measures, where ANSDE III outperformed the others in every metric.
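The Pareto comparisons underlying such multi-objective metrics rest on non-dominated sorting, which can be sketched in a few lines (the pairing scores below are invented for illustration, not taken from the paper):

```python
import numpy as np

def dominates(a, b):
    """True if solution a Pareto-dominates b (minimisation on all objectives)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def non_dominated_front(points):
    """Return indices of the Pareto-optimal (non-dominated) solutions."""
    pts = np.asarray(points, dtype=float)
    return [i for i, p in enumerate(pts)
            if not any(dominates(q, p) for j, q in enumerate(pts) if j != i)]

# Hypothetical pairings scored on (cost, crew dissatisfaction):
pairings = [(100, 5), (90, 7), (95, 6), (120, 4), (90, 8)]
print(non_dominated_front(pairings))  # → [0, 1, 2, 3]
```

Algorithms such as NSGA III and ANSDE III repeatedly peel off fronts like this one and then select within each front by a diversity criterion.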


Entropy, 2021, Vol 23 (12), pp. 1634
Author(s): Eoin O’Connor, Bassano Vacchini, Steve Campbell

We extend collisional quantum thermometry schemes to allow for stochasticity in the waiting time between successive collisions. We establish that introducing randomness through a suitable waiting time distribution, the Weibull distribution, allows us to significantly extend the parameter range for which an advantage over the thermal Fisher information is attained. These results are explicitly demonstrated for dephasing interactions and also hold for partial swap interactions. Furthermore, we show that the optimal measurements can be performed locally, thus implying that genuine quantum correlations do not play a role in achieving this advantage. We explicitly confirm this by examining the correlation properties for the deterministic collisional model.
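The Weibull waiting-time law used above is straightforward to sample; the sketch below (shape and scale values illustrative, not taken from the paper) checks the samples against the analytic mean:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def weibull_waiting_times(n, shape_k, scale_lam):
    """Sample n inter-collision waiting times from a Weibull(k, lambda) law.

    Large shape_k concentrates the waits near the scale, approaching the
    deterministic (fixed-interval) collisional model; small shape_k spreads
    them broadly, i.e. a highly stochastic collision schedule.
    """
    return scale_lam * rng.weibull(shape_k, size=n)

# Sanity check against the analytic mean lambda * Gamma(1 + 1/k):
times = weibull_waiting_times(200_000, 2.0, 1.0)
print(times.mean(), math.gamma(1.5))
```

Varying `shape_k` then interpolates between the deterministic collisional model and strongly random waiting times, the regime the abstract exploits.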


Author(s): Miguel Ángel Solís-Prosser, Omar Jiménez, Aldo Delgado, Leonardo Neves

Abstract: The impossibility of deterministic and error-free discrimination among nonorthogonal quantum states lies at the core of quantum theory and constitutes a primitive for secure quantum communication. Demanding determinism leads to errors, while demanding certainty leads to some inconclusiveness. One of the most fundamental strategies developed for this task is the optimal unambiguous measurement. It encompasses conclusive results, which allow for error-free state retrodictions with the maximum success probability, and inconclusive results, which are discarded for not allowing perfect identifications. Interestingly, in high-dimensional Hilbert spaces the inconclusive results may contain valuable information about the input states. Here, we theoretically describe and experimentally demonstrate the discrimination of nonorthogonal states from both conclusive and inconclusive results in the optimal unambiguous strategy, by concatenating a minimum-error measurement in its inconclusive space. Our implementation comprises 4- and 9-dimensional spatially encoded photonic states. By accessing the inconclusive space to retrieve the information that is wasted in the conventional protocol, we achieve significant increases of up to a factor of 2.07 and 3.73, respectively, in the overall probabilities of correct retrodictions. The concept of concatenated optimal measurements demonstrated here can be extended to other strategies and will enable one to explore the full potential of high-dimensional nonorthogonal states for quantum communication with larger alphabets.
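For two equiprobable nonorthogonal pure states, the two strategies concatenated in this work have textbook closed forms (standard results quoted for context, not figures from this paper):

```latex
% Optimal unambiguous (IDP) discrimination of |\psi_0\rangle, |\psi_1\rangle
% with equal priors and overlap s = |\langle\psi_0|\psi_1\rangle|:
P_{\mathrm{success}}^{\mathrm{UD}} = 1 - s, \qquad
P_{\mathrm{inconclusive}} = s.

% Minimum-error (Helstrom) discrimination of the same pair:
P_{\mathrm{error}}^{\mathrm{ME}} = \tfrac{1}{2}\bigl(1 - \sqrt{1 - s^{2}}\bigr).
```

The concatenation idea is to apply a minimum-error-style measurement to the fraction $s$ of inconclusive outcomes instead of discarding it.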


2021, Vol 508 (2), pp. 1632-1651
Author(s): Sukhdeep Singh

ABSTRACT: We review the methodology for measurements of two-point functions of cosmological observables, both power spectra and correlation functions. For pseudo-Cℓ estimators, we argue that the window-weighted overdensity field can yield more optimal measurements, as the window acts as an inverse noise weight, an effect that becomes more important for surveys with a variable selection function. We then discuss the impact of approximations made in the Master algorithm and suggest improvements, the iMaster algorithm, which uses the theoretical model to give unbiased results for arbitrarily complex windows provided that the model satisfies weak accuracy conditions. The methodology of the iMaster algorithm is also generalized to correlation functions to reconstruct the binned power spectra, for E/B mode separation, or to properly convolve the correlation functions to account for scale cuts in the Fourier-space model. We also show that errors in the window estimation lead to both additive and multiplicative effects on the overdensity field. Accurate estimation of the window power can be required up to scales of ∼2ℓ_max or larger. Mis-estimation of the window power leads to biases in the measured power spectra, which scale as $\delta C_\ell \sim M^W_{\ell\ell'}\,\delta W_{\ell'}$, where $M^W_{\ell\ell'}$ scales as ∼(2ℓ + 1)Cℓ, leading to effects that can be important at high ℓ. While the notation in this paper is geared towards photometric galaxy surveys, the discussion is equally applicable to spectroscopic galaxy, intensity mapping, and Cosmic Microwave Background (CMB) surveys.
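As a toy illustration of the Master-style deconvolution discussed here: the pseudo spectrum is the true spectrum convolved with a mode-coupling matrix, and the true spectrum is recovered by inverting that coupling. The matrix below is a random stand-in, not the actual window-derived coupling:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mode-coupling matrix M (row-normalised), standing in for the
# coupling that would be computed from the survey window power spectrum.
n_ell = 50
M = np.eye(n_ell) + 0.05 * rng.random((n_ell, n_ell))
M /= M.sum(axis=1, keepdims=True)

C_true = 1.0 / (1.0 + np.arange(n_ell))  # toy input power spectrum
C_pseudo = M @ C_true                    # window-convolved (pseudo) C_ell

# Master-style step: invert the mode coupling to recover the true spectrum.
C_hat = np.linalg.solve(M, C_pseudo)
```

The paper's point about window mis-estimation corresponds to perturbing `M` (via δW) before the solve, which propagates into a bias δCℓ in `C_hat`.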


2021, Vol 2021, pp. 1-17
Author(s): Yifan Sun, Chaozhong Wu, Hui Zhang, Yijun Zhang, Shaopeng Li, ...

The contribution of a measurement to detecting drowsy driving is determined by its calculation parameters, which directly affect the accuracy of drowsiness detection. Previous studies used the same Unified Calculation Parameters (UCPs) to compute every driver's measurements. However, since each driver has unique driving behavior characteristics, namely, a driver fingerprint, the Individual Drivers' Best Calculation Parameters (IDBCPs) that make measurements most discriminative of drowsiness vary from driver to driver. Ignoring these differences in driver fingerprint and using UCPs instead of IDBCPs limits the drowsiness-detection performance of the measurements and reduces detection accuracy at the individual-driver level. Thus, this paper proposed a model to optimize the calculation parameters of an individual driver's measurements and to extract the measurements that effectively distinguish drowsy driving. Through real-vehicle experiments, we collected naturalistic driving data and subjective drowsiness levels evaluated on the Karolinska Sleepiness Scale. Eight nonintrusive drowsiness-related measurements were calculated with double-layer sliding time windows. In the proposed model, we first applied the Wilcoxon test to analyze differences between measurements in the awake and drowsy states, and constructed a fitness function reflecting the relationship between the calculation parameters and a measurement's drowsiness-detection performance. Second, genetic algorithms were used to optimize the fitness functions and obtain each measurement's IDBCPs. Finally, we selected the measurements calculated by IDBCPs that distinguish drowsy driving to constitute each driver's optimal drowsiness-detection measurement set. To verify the advantages of IDBCPs, the measurements calculated by UCPs and by IDBCPs were used to build driver-specific drowsiness-detection models, DF_U and DF_I, based on the Fisher discriminant algorithm. The mean drowsiness-detection accuracies of DF_U and DF_I were 85.25% and 91.06%, respectively, indicating that IDBCPs enhance the measurements' drowsiness-detection performance and improve detection accuracy. This paper contributes to the establishment of personalized drowsiness-detection models that account for driver-fingerprint differences.
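The genetic algorithm's fitness couples the calculation parameters to a Wilcoxon-style separability score between awake and drowsy samples. A minimal numpy sketch of such a score (a plain Wilcoxon rank-sum z-statistic; ties ignored for simplicity, and all names hypothetical rather than from the paper):

```python
import numpy as np

def ranksum_z(awake, drowsy):
    """Wilcoxon rank-sum z-statistic between awake and drowsy samples of one
    measurement. Larger |z| means the measurement separates the two states
    better, making |z| a natural fitness value for the GA to maximise.
    """
    x = np.asarray(awake, dtype=float)
    y = np.asarray(drowsy, dtype=float)
    n, m = len(x), len(y)
    # 1-based ranks of the pooled sample (ties ignored for simplicity):
    ranks = np.argsort(np.argsort(np.concatenate([x, y]))) + 1
    w = ranks[:n].sum()                           # rank sum of the awake sample
    mu = n * (n + m + 1) / 2.0                    # mean of w under H0
    sigma = np.sqrt(n * m * (n + m + 1) / 12.0)   # std of w under H0
    return (w - mu) / sigma
```

In the paper's scheme, the GA would recompute the measurement with candidate calculation parameters, score it with a statistic like this, and keep the parameters that maximise the separation.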


Author(s): Justin Finkel, Robert J. Webber, Edwin P. Gerber, Dorian S. Abbot, Jonathan Weare

Abstract: Rare events arising in nonlinear atmospheric dynamics remain hard to predict and attribute. We address the problem of forecasting rare events in a prototypical example, Sudden Stratospheric Warmings (SSWs). Approximately once every other winter, the boreal stratospheric polar vortex rapidly breaks down, shifting midlatitude surface weather patterns for months. We focus on two key quantities of interest: the probability of an SSW occurring, and the expected lead time if it does occur, as functions of initial condition. These optimal forecasts concretely measure the event’s progress. Direct numerical simulation can estimate them in principle, but is prohibitively expensive in practice: each rare event requires a long integration to observe, and the cost of each integration grows with model complexity. We describe an alternative approach using integrations that are short compared to the timescale of the warming event. We compute the probability and lead time efficiently by solving equations involving the transition operator, which encodes all information about the dynamics. We relate these optimal forecasts to a small number of interpretable physical variables, suggesting optimal measurements for forecasting. We illustrate the methodology on a prototype SSW model developed by Holton and Mass (1976) and modified by stochastic forcing. While highly idealized, this model captures the essential nonlinear dynamics of SSWs and exhibits the key forecasting challenge: the dramatic separation in timescales between a single event and the return time between successive events. Our methodology is designed to fully exploit high-dimensional data from models and observations, and has the potential to identify detailed predictors of many complex rare events in meteorology.
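The event-probability forecast described above is a committor: the probability of reaching the warming event before relaxing back to the undisturbed vortex. On a discretised transition operator it reduces to a small linear solve; the 5-state chain below is invented purely for illustration:

```python
import numpy as np

# Toy 5-state Markov chain standing in for the discretised SSW dynamics:
# state 0 = undisturbed vortex (set A), state 4 = warming event (set B).
P = np.array([
    [0.8, 0.2, 0.0, 0.0, 0.0],
    [0.3, 0.4, 0.3, 0.0, 0.0],
    [0.0, 0.3, 0.4, 0.3, 0.0],
    [0.0, 0.0, 0.3, 0.4, 0.3],
    [0.0, 0.0, 0.0, 0.2, 0.8],
])

# Committor q(x) = probability of hitting B before A, starting from x.
# It satisfies q = P q on interior states, with q(A) = 0 and q(B) = 1.
interior = [1, 2, 3]
A_mat = np.eye(3) - P[np.ix_(interior, interior)]
b = P[interior, 4]                     # one-step flux straight into B
q_int = np.linalg.solve(A_mat, b)
q = np.array([0.0, *q_int, 1.0])
print(q)
```

Expected lead times obey an analogous linear equation in the same transition operator, which is why short trajectories suffice to estimate both quantities.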


2021, Vol 20 (7)
Author(s): Xuan-Hoai Thi Nguyen, Mahn-Soo Choi

Abstract: In contrast to standard quantum state tomography, direct tomography seeks direct access to the complex values of the wave function at particular positions. Originally put forward as a special case of weak measurement, it has since been extended to arbitrary measurement setups. We generalize the idea of “quantum metrology,” where a real-valued phase is estimated, to the estimation of a complex-valued phase. We show that this enables us to identify the optimal measurements and to investigate the fundamental precision limit of direct tomography. We propose a few experimentally feasible examples of direct tomography schemes and, based on the complex phase estimation formalism, demonstrate that direct tomography can reach the Heisenberg limit.
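The precision limits referred to here are the standard scalings of phase estimation with N probe resources (textbook results, quoted for context rather than taken from this paper):

```latex
\Delta\varphi_{\mathrm{SQL}} \sim \frac{1}{\sqrt{N}}
\quad\text{(standard quantum limit)},
\qquad
\Delta\varphi_{\mathrm{HL}} \sim \frac{1}{N}
\quad\text{(Heisenberg limit)}.
```

Reaching the Heisenberg limit thus means the direct-tomography error shrinks linearly, not as the square root, in the invested resources.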


Quantum, 2021, Vol 5, pp. 414
Author(s): Seung-Woo Lee, Jaewan Kim, Hyunchul Nha

Quantum measurement is a basic tool to manifest intrinsic quantum effects, from fundamental tests to quantum information applications. While a measurement is typically performed to gain information on a quantum state, its role in quantum technology is indeed manifold. For instance, quantum measurement is a crucial process element in measurement-based quantum computation. It is also used to detect and correct errors, thereby protecting quantum information in error-correcting frameworks. It is therefore important to fully characterize the roles of quantum measurement, encompassing information gain, state disturbance, and reversibility, together with their fundamental relations. Numerous efforts have been made to obtain the trade-off between information gain and state disturbance, which has become a practical basis for secure information processing. However, a complete information balance must also include the reversibility of quantum measurement, which constitutes an integral part of practical quantum information processing. We here establish all pairs of trade-off relations involving information gain, disturbance, and reversibility, and crucially the one among all of them together. By doing so, we show that reversibility plays a vital role in completing the information balance. Remarkably, our result can be interpreted as an information-conservation law of quantum measurement in a nontrivial form. We completely identify the conditions for optimal measurements that satisfy the conservation law for each trade-off relation, together with their potential applications. Our work can provide a useful guideline for designing a quantum measurement in accordance with the aims of quantum information processors.


2020, Vol 5 (4), pp. 045006
Author(s): Mohammad Yosefpor, Mohammad Reza Mostaan, Sadegh Raeisi

2020
Author(s): Kiran Yadav

This paper examines selected domains of agricultural statistics in India and how they cause sub-optimal measurements in national accounts statistics. The agriculture sector contributes a major share of GDP and provides more than half of the total workforce with their livelihood, and many state economies depend on agriculture-based activities. Hence, it is crucial to check the reliability of agricultural statistics and their correct measurement in the national accounts for a correct valuation of the economy. The paper discusses the evolution of National Accounts Statistics in India and how the agriculture sector is embedded within it. It further discusses the statistics related to land use and crop yields, and the methods and sources used to collect these data. It also discusses the gaps in auditing the agricultural database and the challenges in the methodology used to estimate the sector's contribution to the national accounts.

