Are Kansas farms profit maximizers? A stochastic additive error approach

2021 ◽  
Vol 52 (1) ◽  
pp. 37-50
Author(s):  
Yacob Abrehe Zereyesus ◽  
Allen M. Featherstone ◽  
Michael R. Langemeier
Author(s):  
А.А. ПАВЛОВ ◽  
Ю.А. РОМАНЕНКО ◽  
А.Н. ЦАРЬКОВ ◽  
А.Ю. РОМАНЕНКО ◽  
А.А. МИХЕЕВ

The paper motivates the development of a methodological framework for constructing a code that corrects errors in a given number of information bytes using algebraic syndrome decoding, together with estimates of the associated hardware and time costs. Rules are presented for constructing such a correcting code via a linear procedure with syndrome decoding and an additive error vector, which reduces the hardware cost of the decoding device (in particular, the memory required to store error-vector values). Expressions are derived for estimating the hardware costs of encoding and decoding information when the proposed method of correcting burst errors is used.
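The memory saving described above comes from computing the additive error vector algebraically from the syndrome rather than storing a table of error vectors. A minimal sketch of the idea, using the classic single-error-correcting Hamming(7,4) code rather than the paper's byte-oriented construction:

```python
# Parity-check matrix H of the Hamming(7,4) code: column j is the binary
# expansion of j, so a single-bit additive error at position j yields the
# syndrome j directly.
H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def syndrome(r):
    """s = H r^T over GF(2)."""
    return [sum(h * b for h, b in zip(row, r)) % 2 for row in H]

def decode(received):
    """Algebraic syndrome decoding: the syndrome, read as a binary number,
    is the error position, so the additive error vector is computed on the
    fly instead of being looked up in a stored table."""
    r = list(received)
    s = syndrome(r)
    pos = s[0] * 4 + s[1] * 2 + s[2]
    if pos:                  # nonzero syndrome -> single-bit error at pos
        r[pos - 1] ^= 1      # correct by adding the unit error vector mod 2
    return r
```

For longer codes correcting whole bytes, a stored table of error vectors grows rapidly; an algebraic rule like the one above replaces that table with a small amount of logic, which is the trade-off the abstract refers to.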


Author(s):  
Ran Ben Basat ◽  
Gil Einziger ◽  
Michael Mitzenmacher ◽  
Shay Vargaftik

Quantum ◽  
2020 ◽  
Vol 4 ◽  
pp. 329
Author(s):  
Tomoyuki Morimae ◽  
Suguru Tamaki

It is known that several sub-universal quantum computing models, such as the IQP model, the Boson sampling model, the one-clean-qubit model, and the random circuit model, cannot be classically simulated in polynomial time under certain conjectures in classical complexity theory. Recently, these results have been improved to "fine-grained" versions where even exponential-time classical simulations are excluded assuming certain classical fine-grained complexity conjectures. All these fine-grained results are, however, about the hardness of strong simulations or multiplicative-error sampling. It was an open question whether any fine-grained quantum supremacy result can be shown for a more realistic setup, namely, additive-error sampling. In this paper, we show additive-error fine-grained quantum supremacy (under certain complexity assumptions). As examples, we consider the IQP model, a mixture of the IQP model and log-depth Boolean circuits, and Clifford+T circuits. Similar results should hold for other sub-universal models.
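The distinction between the two sampling notions can be made concrete: an additive-error sampler only needs to be close to the target distribution in total variation (L1) distance, while a multiplicative-error sampler must be close on every outcome individually, including very unlikely ones. A small numeric illustration with made-up distributions (not from the paper):

```python
# Target distribution p and a hypothetical simulator's distribution q.
p = [0.5, 0.25, 0.125, 0.125]
q = [0.52, 0.25, 0.125, 0.105]

# Additive-error criterion: total variation / L1 distance below epsilon.
eps = 0.05
l1 = sum(abs(a - b) for a, b in zip(p, q))
additive_ok = l1 <= eps

# Multiplicative-error criterion: |q(x) - p(x)| <= c * p(x) for EVERY x.
c = 0.1
multiplicative_ok = all(abs(a - b) <= c * a for a, b in zip(p, q))
```

Here q passes the additive test but fails the multiplicative one on the small-probability outcome, which is why additive-error sampling is the weaker (and experimentally more realistic) requirement, and why hardness results for it are stronger.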


2020 ◽  
pp. 1831
Author(s):  
Abbas Zedan Khalaf ◽  
Bashar H Alyasery

In this study, an approach inspired by a standardized calibration method was used to test a laser distance meter (LDM). A laser distance sensor (LDS) was tested against an LDM, and a statistical indicator showed that the former performs similarly to the latter. Regression terms were used to estimate the additive error and the scale correction of the sensors. The specified distance was divided into several segments, each a percentage of the full range, and observed using two sensors, left and right. These sensors were evaluated using the regression between the measured and the reference values. The results were computed using the MINITAB 17 package and Microsoft Excel. The accuracy of the results in this work was ±4.4 mm + 50.89 ppm and ±4.96 mm + 99.88 ppm for LDS1 and LDS2, respectively, based on the LDM accuracy computed over the full range (100 m). These sensors can be very effective for industrial and 3D-modeling purposes, among many other applications, especially as they are inexpensive and available in many versions.
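The "±a mm + b ppm" accuracy statement maps directly onto a straight-line fit of measured against reference distances: the intercept estimates the additive (constant) error and the deviation of the slope from 1 gives the scale error in parts per million. A minimal sketch with invented reference and sensor readings (the study's own data are not reproduced here):

```python
# Hypothetical reference distances (m) and sensor readings (m); in the study
# these would come from the LDM baseline and the LDS under test.
ref  = [10.0, 25.0, 50.0, 75.0, 100.0]
meas = [10.004, 25.006, 50.008, 75.011, 100.013]

# Ordinary least-squares fit: meas = intercept + slope * ref.
n = len(ref)
mx = sum(ref) / n
my = sum(meas) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(ref, meas))
         / sum((x - mx) ** 2 for x in ref))
intercept = my - slope * mx          # additive (constant) error, in metres

# slope - 1 is the scale error, conventionally quoted in parts per million.
scale_ppm = (slope - 1) * 1e6
```

With these made-up numbers the fit yields an additive error of about 3.2 mm and a scale error of about 100 ppm, i.e. an accuracy statement of roughly ±3.2 mm + 100 ppm in the convention used above.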


Symmetry ◽  
2019 ◽  
Vol 11 (9) ◽  
pp. 1107
Author(s):  
Javier Cuesta

We study the relation between almost-symmetries and the geometry of Banach spaces. We show that any almost-linear extension of a transformation that preserves transition probabilities up to an additive error admits an approximation by a linear map, and the quality of the approximation depends on the type and cotype constants of the involved spaces.


2019 ◽  
Vol 575 ◽  
pp. 1031-1040 ◽  
Author(s):  
Omar Wani ◽  
Andreas Scheidegger ◽  
Francesca Cecinati ◽  
Gabriel Espadas ◽  
Jörg Rieckermann

1996 ◽  
Vol 31 (3) ◽  
pp. 284-293 ◽  
Author(s):  
T.V. Burmas ◽  
K.C. Dyer ◽  
P.J. Hurst ◽  
S.H. Lewis

2014 ◽  
Vol 72 (1) ◽  
pp. 130-136 ◽  
Author(s):  
Saang-Yoon Hyun ◽  
Mark N. Maunder ◽  
Brian J. Rothschild

Abstract Many fish stock assessments use a survey index and assume a stochastic error in the index on which a likelihood function of associated parameters is built and optimized for the parameter estimation. The purpose of this paper is to evaluate the assumption that the standard deviation for the difference in the log-transformed index is approximately equal to the coefficient of variation of the index, and also to examine the homo- and heteroscedasticity of the errors. The traditional practice is to assume a common variance of the index errors over time for estimation convenience. However, if additional information is available about year-to-year variability in the errors, such as year-to-year coefficient of variation, then we suggest that the heteroscedasticity assumption should be considered. We examined five methods with the assumption of a multiplicative error in the survey index and two methods with that of an additive error in the index: M1, homoscedasticity in the multiplicative error model; M2, heteroscedasticity in the multiplicative error model; M3, M2 with approximate weighting and an additional parameter for scaling variance; M4–M5, pragmatic practices; M6, homoscedasticity in the additive error model; M7, heteroscedasticity in the additive error model. M1–M2 and M6–M7 are strictly based on statistical theories, whereas M3–M5 are not. Heteroscedasticity methods M2, M3, and M7 consistently outperformed the other methods. However, we select M2 as the best method. M3 requires one more parameter than M2. M7 has problems arising from the use of the raw scale as opposed to the logarithm transformation. Furthermore, the fitted survey index in M7 can be negative although its domain is positive.
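The approximation evaluated above, that the standard deviation of the log-transformed index is roughly the coefficient of variation (CV) of the index, follows from the lognormal multiplicative error model and is easy to check numerically. A sketch with simulated data (not the paper's stock-assessment code):

```python
import math
import random

random.seed(1)
sigma = 0.2                 # sd of the multiplicative (log-scale) error
n = 100_000

# Multiplicative error model: I = mu * exp(eps), eps ~ N(0, sigma^2).
index = [10.0 * math.exp(random.gauss(0.0, sigma)) for _ in range(n)]

mean = sum(index) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in index) / (n - 1))
cv = sd / mean              # coefficient of variation of the raw index

# Exact lognormal CV: sqrt(exp(sigma^2) - 1), which is ~ sigma when sigma
# is small -- hence sd(log I) ~= CV(I).
exact_cv = math.sqrt(math.exp(sigma ** 2) - 1)
```

For sigma = 0.2 the exact CV is about 0.202, within one percent of sigma, which is why treating a reported year-to-year CV as the log-scale standard deviation (as in the heteroscedastic methods M2 and M3) is a reasonable working assumption for moderate error levels.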

