On the Optimal Calculation of the Rice Coding Parameter

Algorithms ◽  
2020 ◽  
Vol 13 (8) ◽  
pp. 181
Author(s):  
Fernando Solano Donado

In this article, we design and evaluate several algorithms for the computation of the optimal Rice coding parameter. We conjecture that the optimal Rice coding parameter can be bounded and verify this conjecture through numerical experiments using real data. We also describe algorithms that partition the input sequence of data into sub-sequences, such that if each sub-sequence is coded with a different Rice parameter, the overall code length is minimised. An algorithm for finding the optimal partitioning solution for Rice codes is proposed, as well as fast heuristics, based on the understanding of the problem trade-offs.
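
As a rough illustration of the cost function involved, the sketch below (ours, not the paper's) computes the optimal Rice parameter by exhaustive search: the Rice code of a nonnegative integer n with parameter k spends (n >> k) + 1 bits on the unary-coded quotient and k bits on the remainder, and the bounded search range mirrors the paper's conjecture that the optimal parameter can be bounded.

```python
def rice_code_length(n: int, k: int) -> int:
    """Bits used by the Rice code of n with parameter k:
    unary quotient (n >> k) plus a terminator bit, plus k remainder bits."""
    return (n >> k) + 1 + k

def optimal_rice_parameter(data, k_max=32):
    """Exhaustive baseline: the k minimising total code length.
    k_max is an assumed bound, in line with the paper's conjecture
    that the optimal parameter lies in a bounded range."""
    best = min(range(k_max + 1),
               key=lambda k: sum(rice_code_length(n, k) for n in data))
    return best, sum(rice_code_length(n, best) for n in data)
```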

Author(s):  
Richard Steinberg (Raytheon Company) ◽  
Alice Diggs (Raytheon Company) ◽  
Jade Driggs

Verification and validation (V&V) for human performance models (HPMs) can be likened to building a house with no bricks, since it is difficult to obtain metrics to validate a model while the system is still in development. HPMs are effective for performing trade-offs between human system design factors, including the number of operators needed, the role of automated tasks versus operator tasks, and the member task responsibilities required to operate a system. On a recent government contract, our team used a human performance model to provide additional analysis beyond traditional trade studies. Our team verified the contractually mandated staff size for operating the system. This task demanded that the model have sufficient fidelity to provide information for high-confidence staffing decisions. It required a method for verifying and validating the model and its results to ensure that it accurately reflected the real world. The situation posed a dilemma because there was no actual system from which to gather real data to validate the model. It is a challenge to validate human performance models, since they support design decisions prior to system development. For example, crew models typically inform the design, staffing needs, and the requirements for each operator's user interface prior to development. This paper discusses a successful case study of how our team met the V&V challenges with the US Air Force model accreditation authority and accredited our human performance model with enough fidelity for requirements testing on an Air Force Command and Control program.


2021 ◽  
Vol 13 (9) ◽  
pp. 222
Author(s):  
Raffaele D'Ambrosio ◽  
Giuseppe Giordano ◽  
Serena Mottola ◽  
Beatrice Paternoster

This work highlights how the stiffness index, which is often used as a measure of stiffness for differential problems, can be employed to model the spread of fake news. In particular, we show that the higher the stiffness index, the more rapidly fake news spreads through a given population. We illustrate the idea through a stiffness analysis of the classical SIR model, commonly used to describe the spread of epidemics in a given population. Numerical experiments, performed on real data, support the effectiveness of the approach.
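
As a minimal sketch of the idea (with illustrative rates, not the paper's), the snippet below integrates the classical SIR model and evaluates one common notion of stiffness index, the largest |Re λ| of the Jacobian of the (S, I) subsystem, along the trajectory; a larger index corresponds to a faster transit of the "infection" (fake news) through the population.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.8, 0.1  # illustrative transmission / removal rates

def sir(t, y):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

def stiffness_index(S, I):
    """Taken here as the largest |Re(eigenvalue)| of the Jacobian of the
    (S, I) subsystem; definitions of the stiffness index vary."""
    J = np.array([[-beta * I, -beta * S],
                  [ beta * I,  beta * S - gamma]])
    return np.abs(np.linalg.eigvals(J).real).max()

sol = solve_ivp(sir, (0.0, 60.0), [0.99, 0.01, 0.0], dense_output=True)
ts = np.linspace(0.0, 60.0, 200)
S, I, _ = sol.sol(ts)
print("peak stiffness index:", max(stiffness_index(s, i) for s, i in zip(S, I)))
```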


Author(s):  
Vasileios Charisopoulos ◽  
Damek Davis ◽  
Mateo Díaz ◽  
Dmitriy Drusvyatskiy

Abstract We consider the task of recovering a pair of vectors from a set of rank one bilinear measurements, possibly corrupted by noise. Most notably, the problem of robust blind deconvolution can be modeled in this way. We consider a natural nonsmooth formulation of the rank one bilinear sensing problem and show that its moduli of weak convexity, sharpness and Lipschitz continuity are all dimension independent, under favorable statistical assumptions. This phenomenon persists even when up to half of the measurements are corrupted by noise. Consequently, standard algorithms, such as the subgradient and prox-linear methods, converge at a rapid dimension-independent rate when initialized within a constant relative error of the solution. We complete the paper with a new initialization strategy, complementing the local search algorithms. The initialization procedure is both provably efficient and robust to outlying measurements. Numerical experiments, on both simulated and real data, illustrate the developed theory and methods.
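
A minimal sketch of the kind of method analysed, assuming the robust ℓ1 formulation f(x, y) = mean_i |⟨a_i, x⟩⟨b_i, y⟩ − c_i| as the "natural nonsmooth formulation" and a subgradient method with normalised, geometrically decaying steps; the names and step parameters are ours, not the paper's.

```python
import numpy as np

def subgradient_bilinear(A, B, c, x0, y0, step0=1.0, q=0.98, iters=500):
    """Subgradient method on the robust ell-1 bilinear sensing loss
        f(x, y) = mean_i |<a_i, x> <b_i, y> - c_i|,
    with normalised, geometrically decaying steps (a sketch of the
    scheme whose dimension-independent rate the paper analyses)."""
    x, y = x0.astype(float).copy(), y0.astype(float).copy()
    for t in range(iters):
        u, v = A @ x, B @ y                   # <a_i, x>, <b_i, y>
        s = np.sign(u * v - c)                # subgradient of |.| at residuals
        gx = (A * (s * v)[:, None]).mean(axis=0)
        gy = (B * (s * u)[:, None]).mean(axis=0)
        norm = np.hypot(np.linalg.norm(gx), np.linalg.norm(gy))
        if norm == 0:                         # exact fit reached
            break
        x -= step0 * q**t / norm * gx
        y -= step0 * q**t / norm * gy
    return x, y
```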


2003 ◽  
Vol 33 (2) ◽  
pp. 365-381 ◽  
Author(s):  
Vytaras Brazauskas ◽  
Robert Serfling

Several recent papers treated robust and efficient estimation of tail index parameters for (equivalent) Pareto and truncated exponential models, for large and small samples. New robust estimators of “generalized median” (GM) and “trimmed mean” (T) type were introduced and shown to provide more favorable trade-offs between efficiency and robustness than several well-established estimators, including those corresponding to methods of maximum likelihood, quantiles, and percentile matching. Here we investigate the performance of the above-mentioned estimators on real data and establish, via goodness-of-fit measures, that the favorable theoretical properties of the GM and T type estimators translate into excellent practical performance. Further, we arrive at guidelines for Pareto model diagnostics, testing, and selection of particular robust estimators in practice. Model fits provided by the estimators are ranked and compared on the basis of Kolmogorov-Smirnov, Cramér-von Mises, and Anderson-Darling statistics.
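
As an illustration of the comparison methodology, the sketch below fits a Pareto tail index by maximum likelihood and by a simple trimmed-mean variant (a stand-in for the paper's T type estimator, without its exact trimming scheme or bias correction), then ranks the fits by the Kolmogorov-Smirnov distance.

```python
import numpy as np
from scipy import stats

def pareto_mle_alpha(x, xm):
    """Maximum-likelihood tail index for a Pareto sample with known x_min."""
    return len(x) / np.sum(np.log(x / xm))

def pareto_trimmed_alpha(x, xm, trim=0.1):
    """Robust trimmed-mean estimate: trims a fraction of the smallest and
    largest log-exceedances before averaging (no bias correction; the
    paper's T estimator is more careful)."""
    z = np.sort(np.log(x / xm))
    k = int(trim * len(z))
    return 1.0 / z[k:len(z) - k].mean()

def ks_distance(x, xm, alpha):
    """Kolmogorov-Smirnov distance between the sample and a fitted Pareto."""
    return stats.kstest(x, lambda t: 1.0 - (xm / t) ** alpha).statistic

rng = np.random.default_rng(0)
xm, alpha = 1.0, 2.0
x = (rng.pareto(alpha, 500) + 1.0) * xm       # classical Pareto sample
for name, est in [("MLE", pareto_mle_alpha(x, xm)),
                  ("trimmed", pareto_trimmed_alpha(x, xm))]:
    print(f"{name}: alpha_hat={est:.3f}  KS={ks_distance(x, xm, est):.4f}")
```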


Geophysics ◽  
2020 ◽  
Vol 85 (1) ◽  
pp. U1-U20
Author(s):  
Yanadet Sripanich ◽  
Sergey Fomel ◽  
Jeannot Trampert ◽  
William Burnett ◽  
Thomas Hess

Parameter estimation from reflection moveout analysis represents one of the most fundamental problems in subsurface model building. We have developed an efficient moveout inversion method based on automatic flattening of common-midpoint (CMP) gathers using local slopes. We find that, as a by-product of this flattening process, we can also estimate the reflection traveltimes corresponding to the flattened CMP gathers. This traveltime information allows us to construct a highly overdetermined system and subsequently invert for moveout parameters, including normal-moveout velocities and quartic coefficients related to anisotropy. We use the 3D generalized moveout approximation (GMA), which can accurately capture the effects of complex anisotropy on reflection traveltimes, as the basis for our moveout inversion. Because forward traveltime computation with GMA is cheap, we use a Monte Carlo inversion scheme for improved handling of the nonlinearity between reflection traveltimes and moveout parameters. This choice also allows us to set up a probabilistic inversion workflow within a Bayesian framework, in which we obtain posterior probability distributions that contain valuable statistical information on the estimated parameters, such as uncertainty and correlations. We use synthetic and real data examples, including data from the SEAM Phase II unconventional reservoir model, to demonstrate the performance of our method and discuss insights into the problem of moveout inversion gained from analyzing the posterior probability distributions. Our results suggest that the solutions to the problem of traveltime-only moveout inversion from 2D CMP gathers are relatively well constrained by the data. However, parameter estimation from 3D CMP gathers, which involves more moveout parameters and more complex anisotropic models, is generally nonunique, and there are trade-offs among the inverted parameters, especially the quartic coefficients.
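
As a schematic of the probabilistic workflow, the sketch below runs a random-walk Metropolis sampler over moveout parameters, using a simplified nonhyperbolic moveout t²(x) = t0² + x²/v² + Ax⁴ in place of the 3D GMA; all starting values, proposal scales, and noise levels are illustrative.

```python
import numpy as np

def moveout_time(x, t0, v, A):
    """Simplified nonhyperbolic moveout t^2 = t0^2 + x^2/v^2 + A*x^4,
    standing in for the 3D generalized moveout approximation (GMA)."""
    return np.sqrt(t0**2 + x**2 / v**2 + A * x**4)

def metropolis_moveout(offsets, t_obs, n_iter=20000, sigma=0.005, seed=1):
    """Random-walk Metropolis over (t0, v, A); the resulting samples give
    the posterior marginals used to read off uncertainties and trade-offs."""
    rng = np.random.default_rng(seed)
    theta = np.array([1.0, 2.0, 0.0])       # start: t0 [s], v [km/s], A
    scales = np.array([0.005, 0.02, 1e-4])  # proposal widths (illustrative)
    def loglike(th):
        r = t_obs - moveout_time(offsets, *th)
        return -0.5 * np.dot(r, r) / sigma**2
    ll, chain = loglike(theta), []
    for _ in range(n_iter):
        prop = theta + scales * rng.standard_normal(3)
        llp = loglike(prop)
        if np.log(rng.random()) < llp - ll:  # accept/reject step
            theta, ll = prop, llp
        chain.append(theta.copy())
    return np.array(chain)
```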


2019 ◽  
Vol 53 (2) ◽  
pp. 559-576 ◽  
Author(s):  
Pascal Schroeder ◽  
Imed Kacem ◽  
Günter Schmidt

In this work we investigate the portfolio selection problem (P1) and bi-directional trading (P2) when prices are interrelated. Zhang et al. (J. Comb. Optim. 23 (2012) 159–166) provided the algorithm UND, which solves one variant of P2. We are interested in solutions that are optimal from a worst-case perspective. For P1, we derive the worst-case input sequence and the algorithm Optimal Portfolio for Interrelated Prices (OPIP), and we prove its competitive ratio and optimality. We use the idea behind OPIP to solve P2 and derive the algorithm Optimal Conversion for Interrelated Prices (OCIP). Building on OCIP, we also design optimal online algorithms for bi-directional search (P3), called bi-directional UND (BUND), and for online search under unknown relative price bounds (RUN). Numerical experiments show that OPIP and OCIP perform well compared to other algorithms even when prices do not behave adversely.
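
For context, the sketch below implements the classical reservation-price baseline for one-way search with known price bounds (not the paper's OPIP/OCIP, whose interrelated-price analysis is more involved), together with the empirical competitive ratio used to benchmark such online algorithms.

```python
import math

def reservation_price_search(prices, m, M):
    """Classical baseline for one-way search with known bounds [m, M]:
    accept the first price >= sqrt(m*M); this policy is known to be
    sqrt(M/m)-competitive. Forced to trade at the last price otherwise."""
    p_star = math.sqrt(m * M)
    for p in prices:
        if p >= p_star:
            return p
    return prices[-1]

def empirical_competitive_ratio(prices, achieved):
    """Offline best selling price divided by the online result."""
    return max(prices) / achieved

prices = [1.2, 1.5, 2.4, 1.9, 3.0, 1.1]
got = reservation_price_search(prices, m=1.0, M=4.0)
print(got, empirical_competitive_ratio(prices, got))
```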


Author(s):  
LEV V. UTKIN

This paper proposes a new approach to ensemble construction that avoids overfitting by restricting the set of weights assigned to the training examples. The algorithm, called EPIBoost (Extreme Points Imprecise Boost), applies imprecise statistical models to restrict the set of weights; the weights are then updated within the restricted set by using its extreme points. The approach makes it possible to construct a family of algorithms by applying different imprecise statistical models to produce the restricted set. Numerical experiments with real data sets show that EPIBoost may outperform standard AdaBoost for some parameters of the imprecise statistical models.
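
A minimal sketch of the idea, assuming an ε-contamination model as the imprecise statistical model: each example weight is confined to [(1−ε)/n, (1−ε)/n + ε], and a simple clip-and-renormalise step stands in for the paper's extreme-point update.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def epiboost_sketch(X, y, T=50, eps=0.3):
    """AdaBoost with weights restricted to an eps-contamination set:
    every weight stays in [(1-eps)/n, (1-eps)/n + eps], capping the
    influence of any single (possibly noisy) example. A clip-and-
    renormalise step stands in for the paper's extreme-point update.
    Labels y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    lo, hi = (1.0 - eps) / n, (1.0 - eps) / n + eps
    learners, alphas = [], []
    for _ in range(T):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = w[pred != y].sum()
        if err >= 0.5 or err == 0.0:
            break
        alpha = 0.5 * np.log((1.0 - err) / err)
        w = w * np.exp(-alpha * y * pred)     # standard AdaBoost update
        w = np.clip(w / w.sum(), lo, hi)      # restrict to the imprecise set
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return lambda Xq: np.sign(
        sum(a * h.predict(Xq) for a, h in zip(alphas, learners)))
```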


2018 ◽  
Vol 8 (9) ◽  
pp. 1676 ◽  
Author(s):  
Vincent Gripon ◽  
Matthias Löwe ◽  
Franck Vermet

Nearest neighbor search is a very active field in machine learning. It arises in many applications, including classification and object retrieval. In its naive implementation, the complexity of the search is linear in the product of the dimension and the cardinality of the collection of vectors in which the search is performed. Recently, many works have focused on reducing the dimension of the vectors using quantization techniques or hashing, while providing an approximate result. In this paper, we focus instead on tackling the cardinality of the collection. Namely, we introduce a technique that partitions the collection of vectors and stores each part in its own associative memory. When a query vector is given to the system, the associative memories are polled to identify which one contains the closest match. An exhaustive search is then conducted only on the part of the collection stored in the selected associative memory. We study the effectiveness of the system when the messages to store are generated from i.i.d. uniform ±1 random variables or sparse i.i.d. 0–1 random variables. We also conduct experiments on both synthetic and real data and show that it is possible to achieve interesting trade-offs between complexity and accuracy.
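
A minimal sketch of the architecture, with a nearest-centroid poll standing in for the associative memories of the paper: the collection is partitioned by a coarse quantizer, the query polls the part summaries, and an exhaustive search runs only inside the selected part.

```python
import numpy as np

def build_index(vectors, n_parts=16, seed=0):
    """Partition the collection with a coarse quantizer: each vector is
    assigned to its nearest seed centroid. The paper stores each part in
    an associative memory; the centroids play that role here."""
    rng = np.random.default_rng(seed)
    centroids = vectors[rng.choice(len(vectors), n_parts, replace=False)]
    parts = np.argmin(((vectors[:, None, :] - centroids) ** 2).sum(-1), axis=1)
    return parts, centroids

def query(vectors, parts, centroids, q):
    """Poll the part summaries, then search exhaustively only inside the
    selected part: the complexity/accuracy trade-off studied in the paper."""
    p = np.argmin(((centroids - q) ** 2).sum(axis=1))
    idx = np.flatnonzero(parts == p)
    best = np.argmin(((vectors[idx] - q) ** 2).sum(axis=1))
    return idx[best]
```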


2021 ◽  
Vol 49 (4) ◽  
pp. 86-101
Author(s):  
T. O. Sheloput ◽  
V. I. Agoshkov

Modeling the hydrothermodynamics of particular sea and coastal areas is of current interest, since the results of such modeling are used in many applications. Variational assimilation of observational data is one of the methods that makes it possible to account for open boundaries and to bring simulation results closer to real data. In this paper the following approach is considered: observational data are assumed to be available at a certain moment in time, and the problem is treated as an inverse problem in which the functions describing the fluxes across the open boundary are additional unknowns. Methods for reconstructing these unknown boundary functions from sea-level and velocity observations are compared in a series of numerical experiments for a region of simple shape.
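
As a toy illustration of the inverse-problem formulation (with an invented linear forward operator, not the authors' hydrothermodynamic model), the sketch below recovers open-boundary fluxes by minimising a regularised misfit to interior observations.

```python
import numpy as np
from scipy.optimize import minimize

def forward(flux, n=50):
    """Toy linear forward operator mapping open-boundary fluxes to an
    interior state; a stand-in for the hydrothermodynamic model."""
    grid = np.subtract.outer(np.arange(n), np.linspace(0.0, n, len(flux)))
    return np.exp(-np.abs(grid) / 10.0) @ flux

def assimilate(y_obs, n_flux=5, reg=1e-2):
    """Variational assimilation as an inverse problem: find the boundary
    fluxes minimising observation misfit plus Tikhonov regularisation."""
    def J(flux):
        r = forward(flux, n=len(y_obs)) - y_obs
        return 0.5 * r @ r + 0.5 * reg * flux @ flux
    return minimize(J, np.zeros(n_flux), method="L-BFGS-B").x

true_flux = np.array([1.0, -0.5, 0.3, 0.8, -0.2])
y_obs = forward(true_flux) + 0.01 * np.random.default_rng(2).standard_normal(50)
print(assimilate(y_obs))
```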


2021 ◽  
Author(s):  
Qifeng Wan ◽  
Xuanhua Xu ◽  
Kyle Hunt ◽  
Jun Zhuang

During the COVID-19 pandemic, staying home proved to be an effective way to mitigate the spread of the virus. Stay-at-home orders and guidelines were issued by governments across the globe and were followed by a large portion of the population in the early stages of the outbreak when there was a lack of COVID-specific medical knowledge. The decision of whether to stay home came with many trade-offs, such as risking personal exposure to the virus when leaving home or facing financial and mental health burdens when remaining home. In this research, we study how individuals make strategic decisions to balance these conflicting outcomes. We present a model to study individuals’ decision making based on decision and prospect theory, and we conduct sensitivity analysis to study the fluctuations in optimal strategies when there are changes made to the model’s parameters. A Monte Carlo simulation is implemented to further study the performance of our model, and we compare our simulation results with real data that captures individuals’ stay-at-home decisions. Overall, this research models and analyzes the behaviors of individuals during the COVID-19 pandemic and can help support decision making regarding control measures and policy development when public health emergencies appear in the future.
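
A minimal sketch of the decision model under stated assumptions: outcomes are valued with the Tversky-Kahneman prospect-theory value function, and a Monte Carlo simulation compares going out (risking infection) with staying home (a certain cost). All probabilities and loss values below are illustrative, not the paper's calibrated parameters.

```python
import numpy as np

def pt_value(x, alpha=0.88, lam=2.25):
    """Tversky-Kahneman prospect-theory value function: concave for
    gains, loss-averse (factor lam) for losses."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** alpha)

def simulate(p_infect=0.05, loss_infect=-100.0, loss_stay=-10.0,
             n=100_000, seed=0):
    """Monte Carlo comparison of the two strategies: going out risks a
    large infection loss with probability p_infect; staying home incurs
    a certain financial/mental-health cost. Values are illustrative."""
    rng = np.random.default_rng(seed)
    go_out = pt_value(np.where(rng.random(n) < p_infect, loss_infect, 0.0))
    stay = pt_value(np.full(n, loss_stay))
    return go_out.mean(), stay.mean()

print(simulate())  # mean prospect values: (go out, stay home)
```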

