estimation quality
Recently Published Documents


TOTAL DOCUMENTS: 59 (FIVE YEARS: 22)
H-INDEX: 6 (FIVE YEARS: 2)

Entropy ◽ 2022 ◽ Vol 24 (1) ◽ pp. 95
Author(s): Pontus Söderbäck ◽ Jörgen Blomvall ◽ Martin Singull

Liquid financial markets, such as the options market of the S&P 500 index, create vast amounts of data every day, so-called intraday data. However, this highly granular data is often reduced to a single observation time when used to estimate financial quantities. This under-utilization of the data may reduce the quality of the estimates. In this paper, we study the impact on estimation quality when intraday data are used to estimate dividends. The methodology is based on earlier linear regression (ordinary least squares) estimates, which have been adapted to intraday data. Further, the method is generalized in two respects. First, the dividends are expressed as present values of future dividends rather than as dividend yields. Second, to account for heteroscedasticity, the estimation methodology is formulated as a weighted least squares problem, where the weights are determined from the market data. This method is compared with a traditional method on out-of-sample S&P 500 European options market data. The results show that estimates based on intraday data are, with statistical significance, of higher quality than the corresponding single-time estimates. Additionally, the two generalizations of the methodology are shown to improve the estimation quality further.
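As a rough illustration of the regression idea behind such dividend estimates, the sketch below fits put-call parity, C − P = (S − PV(D)) − e^(−rT)·K, across strikes by weighted least squares; the quotes, spreads, and inverse-spread weights are invented for the example and are not the authors' data or their exact weighting scheme.

```python
import numpy as np

def estimate_pv_dividend(strikes, call_mid, put_mid, spot, weights=None):
    """Estimate PV(D) and the discount factor from put-call parity:
    C - P = (S - PV(D)) - exp(-rT) * K.

    A weighted least squares fit of (C - P) on K gives
      intercept = S - PV(D),  slope = -exp(-rT).
    """
    y = call_mid - put_mid                          # parity left-hand side per strike
    X = np.column_stack([np.ones_like(strikes), strikes])
    if weights is None:
        weights = np.ones_like(strikes)             # OLS as the unweighted special case
    W = np.diag(weights)
    # Solve the normal equations (X^T W X) beta = X^T W y
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    intercept, slope = beta
    pv_dividend = spot - intercept                  # PV(D) = S - intercept
    discount_factor = -slope                        # exp(-rT) = -slope
    return pv_dividend, discount_factor

# Hypothetical quotes around the money (not market data)
strikes = np.array([3800.0, 3900.0, 4000.0, 4100.0, 4200.0])
calls   = np.array([265.0, 190.0, 128.0, 80.0, 46.0])
puts    = np.array([110.0, 134.0, 171.0, 222.0, 287.0])
spreads = np.array([1.2, 0.8, 0.6, 0.8, 1.3])       # hypothetical bid-ask spreads
pv_d, df = estimate_pv_dividend(strikes, calls, puts, spot=4000.0,
                                weights=1.0 / spreads**2)   # tighter quotes weigh more
print(pv_d, df)
```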


2021 ◽ Vol 2
Author(s): Hajer Srihi ◽ Thierry-Marie Guerra ◽ Anh-Tu Nguyen ◽ Philippe Pudlo ◽ Antoine Dequidt

People with spinal cord injury (SCI) suffer from a drastic reduction in sitting stability, which negatively impacts their postural control. Sitting balance thus becomes one of the most challenging everyday exercises. To better understand the consequences of this pathology, we have to work with large non-linear biomechanical models, which entails both theoretical and numerical difficulties. Since the main goal is to recover unmeasured inputs, the observer should rely on little or no model simplification in order to provide better estimation quality. A proportional-integral observer (PI-observer) is designed, and its convergence conditions are formulated as linear matrix inequalities (LMIs) solved with convex optimization techniques. With a single large observer, however, the LMI problem can quickly exceed the limits of current solvers because of the number of unknown parameters involved. A way to address this issue is to design a cascade observer to estimate the unmeasurable torques of a human with SCI. This approach consists of decomposing the biomechanical model into interconnected subsystems and building “local” observers. The relevance of the approach is demonstrated in simulation and with real-time experimental data.
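For readers unfamiliar with LMI-based observer synthesis, the following sketch shows the basic mechanics on a toy system with cvxpy, using a plain Luenberger observer for brevity; it is not the authors' PI-observer or cascade design, and the matrices are illustrative only.

```python
import cvxpy as cp
import numpy as np

# Toy observable system x' = A x, y = C x (not the biomechanical model).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
n, p = A.shape[0], C.shape[0]

# LMI synthesis of a gain L making A - L C stable:
# find P = P^T > 0 and Y such that A^T P + P A - C^T Y^T - Y C < 0, then L = P^{-1} Y.
P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, p))
lmi = A.T @ P + P @ A - C.T @ Y.T - Y @ C
eps = 1e-3
constraints = [P >> eps * np.eye(n),
               0.5 * (lmi + lmi.T) << -eps * np.eye(n)]   # symmetrized for the solver
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)

L = np.linalg.solve(P.value, Y.value)                     # observer gain
print("eig(A - L C):", np.linalg.eigvals(A - L @ C))      # should have negative real parts
```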


Sensors ◽ 2021 ◽ Vol 21 (18) ◽ pp. 6219
Author(s): Petar D. Milanović ◽ Ilija V. Popadić ◽ Branko D. Kovačević

Video stabilization is essential for long-range electro-optical systems, especially when the field of view is narrow, since system shake may produce highly deteriorating effects. It is important that the stabilization works for different camera types, i.e., sensors covering different parts of the electromagnetic spectrum, independently of the weather conditions and of any form of image distortion. In this paper, we propose a method for real-time video stabilization that uses only gyroscope measurements, analyze its performance, and implement and validate it on a real-world professional electro-optical system developed at Vlatacom Institute. Camera movements are modeled as 3D rotations obtained by integration of MEMS gyroscope measurements. The quality of the 3D orientation estimate depends on the gyroscope characteristics; we provide a detailed discussion of the criteria for gyroscope selection in terms of sensitivity, measurement noise, and drift stability. Furthermore, we propose a method for improving the quality of the unwanted-motion estimate using interpolation in the quaternion domain. We also propose practical solutions for eliminating disturbances originating from gyro bias instability and noise. To evaluate the quality of our solution, we compared its performance with that of two feature-based digital stabilization methods. The general advantage of the proposed method is its drastically lower computational complexity; hence, it can be implemented at low cost, independently of the electro-optical sensor system used.
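A minimal sketch of the two building blocks mentioned above, gyro-to-quaternion integration and interpolation in the quaternion domain (slerp), is given below; the sample rate, angular rates, and numerical thresholds are assumptions for illustration, not the implementation used in the paper.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def integrate_gyro(q, omega, dt):
    """Propagate orientation q by one gyro sample omega (rad/s) over dt seconds."""
    angle = np.linalg.norm(omega) * dt
    if angle < 1e-12:
        return q
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
    q_next = quat_mul(q, dq)
    return q_next / np.linalg.norm(q_next)       # renormalize to fight drift

def slerp(q0, q1, t):
    """Spherical linear interpolation between orientations q0 and q1, t in [0, 1]."""
    dot = np.clip(np.dot(q0, q1), -1.0, 1.0)
    if dot < 0.0:                                # take the short path on the sphere
        q1, dot = -q1, -dot
    if dot > 0.9995:                             # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

# Integrate a short burst of hypothetical gyro samples, then interpolate mid-frame.
q = np.array([1.0, 0.0, 0.0, 0.0])
for omega in [np.array([0.01, 0.20, -0.05])] * 10:   # rad/s, assumed 1 kHz gyro
    q_prev, q = q, integrate_gyro(q, omega, dt=1e-3)
q_mid = slerp(q_prev, q, 0.5)                        # orientation between two samples
print(q, q_mid)
```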


Electronics ◽ 2021 ◽ Vol 10 (18) ◽ pp. 2215
Author(s): Shitong Cui ◽ Le Liu ◽ Wei Xing ◽ Xudong Zhao

This paper considers the problem of remote state estimation in a linear, discrete-time, time-invariant system, where a smart sensor is used to measure the system state and generate a local estimate. Communication depends on an event scheduler in the smart sensor. When the channel between the remote estimator and the smart sensor is activated, the remote estimator simply adopts the estimate transmitted by the smart sensor; otherwise, it calculates an estimate based on the available information. The closed form of the minimum mean-square error (MMSE) estimator is introduced, and a Gaussian-preserving event-based sensor schedule is used to obtain a suitable trade-off between communication cost and estimation quality. Furthermore, we calculate the variation range of the communication probability, which helps design the event-triggered estimation policy. Finally, simulation results are given to illustrate the effectiveness of the proposed event-triggered estimator.
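The sketch below illustrates the general event-triggered remote-estimation pattern on a scalar toy system: the smart sensor runs a local Kalman filter and transmits only when a simple threshold trigger fires, while the remote estimator otherwise propagates its own prediction. The threshold rule is an assumption made for illustration and stands in for the paper's Gaussian-preserving scheduler.

```python
import numpy as np

# Scalar toy LTI system: x_{k+1} = a x_k + w_k,  y_k = x_k + v_k.
a, q_var, r_var = 0.95, 0.1, 0.5
rng = np.random.default_rng(0)

def simulate(steps=200, delta=0.3):
    x, x_hat_local, P = 0.0, 0.0, 1.0
    x_hat_remote = 0.0
    sent = 0
    for _ in range(steps):
        # true system
        x = a * x + rng.normal(0, np.sqrt(q_var))
        y = x + rng.normal(0, np.sqrt(r_var))
        # smart sensor: local Kalman filter
        x_pred, P_pred = a * x_hat_local, a * a * P + q_var
        K = P_pred / (P_pred + r_var)
        x_hat_local = x_pred + K * (y - x_pred)
        P = (1 - K) * P_pred
        # event trigger: transmit only if local and remote estimates disagree enough
        x_hat_remote_pred = a * x_hat_remote
        if abs(x_hat_local - x_hat_remote_pred) > delta:
            x_hat_remote = x_hat_local        # channel activated: adopt local estimate
            sent += 1
        else:
            x_hat_remote = x_hat_remote_pred  # idle: propagate the model prediction
    return sent / steps

print("communication rate:", simulate(delta=0.3))
```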


2021 ◽ Vol 15 (1) ◽ pp. 72-84
Author(s): Jiayi Wang ◽ Chengliang Chai ◽ Jiabin Liu ◽ Guoliang Li

Cardinality estimation is one of the most important problems in query optimization. Recently, machine-learning-based techniques have been proposed to estimate cardinality effectively; they can be broadly classified into query-driven and data-driven approaches. Query-driven approaches learn a regression model from a query to its cardinality, while data-driven approaches learn a distribution over tuples, select samples that satisfy a SQL query, and use the data distributions of these selected tuples to estimate the cardinality of the query. Because query-driven methods rely on training queries, their estimation quality is unreliable when no high-quality training queries are available; data-driven methods have no such limitation and are highly adaptive. In this work, we focus on data-driven methods. A good data-driven model should achieve three optimization goals. First, the model needs to capture data dependencies between columns and support large domain sizes (high accuracy). Second, the model should achieve high inference efficiency, because many data samples are needed to estimate the cardinality (low inference latency). Third, the model should not be too large (small model size). However, existing data-driven methods cannot optimize the three goals simultaneously. To address these limitations, we propose a novel cardinality estimator, FACE, which leverages a normalizing-flow-based model to learn a continuous joint distribution for relational data. FACE transforms a complex distribution over continuous random variables into a simple distribution (e.g., a multivariate normal distribution) and uses the probability density to estimate the cardinality. First, we design a dequantization method to make data more "continuous". Second, we propose encoding and indexing techniques to handle LIKE predicates on string data. Third, we propose a Monte Carlo method to efficiently estimate the cardinality. Experimental results show that our method significantly outperforms existing approaches in terms of estimation accuracy while keeping similar latency and model size.
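To make the Monte Carlo step concrete, the sketch below estimates the cardinality of a range query by integrating a learned joint density over the query box. A Gaussian KDE stands in for FACE's normalizing flow so the example stays self-contained, and the table, query, and sample sizes are invented.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Toy two-column table with correlated numeric attributes.
n_rows = 50_000
col_a = rng.normal(50, 10, n_rows)
col_b = col_a * 0.5 + rng.normal(0, 5, n_rows)
table = np.vstack([col_a, col_b])

# A learned joint density; a normalizing flow would play this role in FACE,
# a Gaussian KDE stands in here to keep the sketch dependency-light.
density = gaussian_kde(table)

def estimate_cardinality(lo, hi, n_samples=10_000):
    """Monte Carlo estimate of |{rows : lo <= (a, b) <= hi}| via the density.

    selectivity = integral of p over the query box
                ~ box_volume * mean(p(u)) for u uniform in the box.
    """
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    u = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    box_volume = np.prod(hi - lo)
    selectivity = box_volume * density(u.T).mean()
    return selectivity * n_rows

query_lo, query_hi = [45.0, 20.0], [60.0, 35.0]
est = estimate_cardinality(query_lo, query_hi)
true = np.sum((col_a >= 45) & (col_a <= 60) & (col_b >= 20) & (col_b <= 35))
print(f"estimated {est:.0f} vs true {true}")
```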


2021 ◽ Vol 8 (1)
Author(s): Biyun Yang ◽ Yong Xu

Deep learning is known as a promising multifunctional tool for processing images and other big data. By assimilating large amounts of heterogeneous data, deep-learning technology provides reliable prediction results for complex and uncertain phenomena. Recently, it has been increasingly used by horticultural researchers to make sense of the large datasets produced during planting and postharvest processes. In this paper, we provide a brief introduction to deep-learning approaches and review 71 recent research works in which deep-learning technologies were applied in the horticultural domain for variety recognition, yield estimation, quality detection, stress phenotyping, growth monitoring, and other tasks. We describe in detail the application scenarios reported in the relevant literature, along with the applied models and frameworks, the data used, and the overall performance achieved. Finally, we discuss the current challenges and future trends of deep learning in horticultural research. The aim of this review is to assist researchers and guide them in understanding the strengths and possible weaknesses of deep learning when applied in horticultural sectors. We also hope that this review will encourage researchers to explore significant applications of deep learning in horticultural science and will promote the advancement of intelligent horticulture.


Author(s): Damek Davis ◽ Dmitriy Drusvyatskiy

We investigate the stochastic optimization problem of minimizing population risk, where the loss defining the risk is assumed to be weakly convex. Compositions of Lipschitz convex functions with smooth maps are the primary examples of such losses. We analyze the estimation quality of such nonsmooth and nonconvex problems by their sample average approximations. Our main results establish dimension-dependent rates on subgradient estimation in full generality and dimension-independent rates when the loss is a generalized linear model. As an application of the developed techniques, we analyze the nonsmooth landscape of a robust nonlinear regression problem.
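In standard notation for this setting (ours, not quoted from the paper), the population risk and its sample average approximation read:

```latex
\min_{x \in \mathbb{R}^d} \; F(x) \;=\; \mathbb{E}_{\xi \sim P}\bigl[f(x;\xi)\bigr],
\qquad
F_n(x) \;=\; \frac{1}{n}\sum_{i=1}^{n} f(x;\xi_i),
\qquad
f(\cdot\,;\xi) \;=\; h_\xi \circ c_\xi ,
```

where each h_ξ is Lipschitz and convex and each c_ξ is smooth, so f(·;ξ) is weakly convex; for example, robust nonlinear regression with an absolute-value loss, f(x;(a,b)) = |φ(a,x) − b| for a smooth model φ, is one such composite loss. The question studied is then how well the subgradients of F_n approximate those of F.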


Author(s): Marina B. A. Souza ◽ Leonardo de Mello Honório ◽ Edimar José de Oliveira
