regression problems
Recently Published Documents

TOTAL DOCUMENTS: 526 (five years: 129)
H-INDEX: 39 (five years: 5)

Sensors ◽  
2022 ◽  
Vol 22 (1) ◽  
pp. 354
Author(s):  
Haoyi Ma ◽  
Scott T. Acton ◽  
Zongli Lin

Accurate and robust scale estimation in visual object tracking is a challenging task. To obtain a scale estimate of the target object, most methods rely either on a multi-scale searching scheme or on refining a set of predefined anchor boxes. These methods require heuristically selected parameters, such as the scale factors of the multi-scale searching scheme or the sizes and aspect ratios of the predefined candidate anchor boxes. In contrast, this work designs a centerness-aware anchor-free tracker (CAT). First, the location and scale of the target object are predicted in an anchor-free fashion by decomposing tracking into parallel classification and regression problems. The anchor-free design obviates the need for anchor-box hyperparameters, making CAT more generic and flexible. Second, the proposed centerness-aware classification branch identifies the foreground from the background while predicting the normalized distance from a location within the foreground to the target center, i.e., the centerness. This branch improves tracking accuracy and robustness significantly by suppressing low-quality state estimates. Experiments show that the centerness-aware anchor-free tracker achieves strong performance in a wide variety of tracking scenarios.
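The abstract does not spell out how centerness is computed; a common definition, popularized by the FCOS detector and used here only as an illustrative assumption about what a centerness branch predicts, scores each location by how symmetrically it sits between the box sides:

```python
import math

def centerness(l, t, r, b):
    """FCOS-style centerness for a location inside a box, given its
    distances to the left, top, right, and bottom box sides.
    Equals 1.0 at the box center and decays toward the edges."""
    return math.sqrt((min(l, r) / max(l, r)) * (min(t, b) / max(t, b)))
```

Weighting classification scores by such a measure down-weights locations far from the object center, which is the intuition behind suppressing low-quality state estimates.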


2022 ◽  
pp. 283-305
Author(s):  
Veronica K. Chan ◽  
Christine W. Chan

This chapter discusses the development, application, and enhancement of a decomposition neural network rule extraction algorithm for nonlinear regression problems. The dual objectives in developing the algorithms are (1) to generate good predictive models comparable in performance to the original artificial neural network (ANN) models, and (2) to “open up” the black box of a neural network model and provide explicit information in the form of rules expressed as linear equations. The enhanced PWL-ANN algorithm improves upon the original PWL-ANN algorithm because it can locate more than two breakpoints and thereby better approximate the hidden sigmoid activation functions of the ANN. A comparison of the results produced by the two versions showed that the enhanced PWL-ANN models provide higher predictive accuracies and better fidelity to the originally trained ANN models than the PWL-ANN models do.
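As a minimal sketch of the piecewise-linear idea behind PWL-ANN (the function names and breakpoint choices below are illustrative, not the chapter's actual algorithm), a sigmoid can be approximated by linear interpolation between its values at a set of breakpoints, and adding breakpoints shrinks the approximation error:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pwl_approx(x, breakpoints):
    """Approximate the sigmoid by linear interpolation between its
    values at the given (increasing) breakpoints."""
    bp = np.asarray(breakpoints, dtype=float)
    return np.interp(x, bp, sigmoid(bp))

x = np.linspace(-6.0, 6.0, 201)
# Two linear segments vs. four: more breakpoints, smaller max error.
err2 = np.max(np.abs(pwl_approx(x, [-6.0, 0.0, 6.0]) - sigmoid(x)))
err4 = np.max(np.abs(pwl_approx(x, [-6.0, -2.0, 0.0, 2.0, 6.0]) - sigmoid(x)))
```

Each linear segment then yields one rule of the form "if x is in this interval, output this linear equation", which is what makes the extracted model interpretable.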


2021 ◽  
pp. 1-39
Author(s):  
Jochen Schmid

We deal with monotonic regression of multivariate functions [Formula: see text] on a compact rectangular domain [Formula: see text] in [Formula: see text], where monotonicity is understood in a generalized sense: as isotonicity in some coordinate directions and antitonicity in some others. As usual, the monotonic regression of a given function [Formula: see text] is the monotonic function [Formula: see text] that has the smallest (weighted) mean-squared distance from [Formula: see text]. We establish a simple general approach to computing monotonic regression functions: we show that the monotonic regression [Formula: see text] of a given function [Formula: see text] can be approximated arbitrarily well, with simple bounds on the approximation error in both the [Formula: see text]-norm and the [Formula: see text]-norm, by the monotonic regressions [Formula: see text] of grid-constant functions [Formula: see text], which can be computed with standard monotonic regression algorithms. We also establish the continuity of the monotonic regression [Formula: see text] of a continuous function [Formula: see text], along with an explicit averaging formula for [Formula: see text]. Finally, we deal with generalized monotonic regression, where the mean-squared distance of standard monotonic regression is replaced by more complex distance measures which arise, for instance, in maximum smoothed likelihood estimation. We will see that the solution of such generalized monotonic regression problems is simply given by the standard monotonic regression [Formula: see text].
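For the one-dimensional case, the classical pool-adjacent-violators algorithm (PAVA) computes the weighted least-squares monotonic regression of a finite sequence; the sketch below illustrates that standard building block, not the paper's multivariate procedure:

```python
def monotonic_regression(y, weights=None):
    """Pool-adjacent-violators algorithm (PAVA): returns the
    non-decreasing sequence with the smallest weighted
    mean-squared distance from y."""
    w = list(weights) if weights is not None else [1.0] * len(y)
    vals, wts, sizes = [], [], []  # merged blocks: mean, total weight, length
    for yi, wi in zip(y, w):
        vals.append(float(yi)); wts.append(wi); sizes.append(1)
        # Merge adjacent blocks while they violate the ordering.
        while len(vals) > 1 and vals[-2] > vals[-1]:
            merged_w = wts[-2] + wts[-1]
            merged_v = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / merged_w
            merged_s = sizes[-2] + sizes[-1]
            vals[-2:], wts[-2:], sizes[-2:] = [merged_v], [merged_w], [merged_s]
    out = []
    for v, s in zip(vals, sizes):
        out.extend([v] * s)  # expand each block back to its original length
    return out
```

An antitonic fit in a coordinate direction is obtained by reversing the sequence before and after the same routine.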


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Shambhavi Mishra ◽  
Tanveer Ahmed ◽  
Vipul Mishra ◽  
Manjit Kaur ◽  
Thomas Martinetz ◽  
...  

This paper proposes multivariate, online prediction of stock prices via the paradigm of kernel adaptive filtering (KAF). Traditional classification and regression approaches to stock price prediction require independent, batch-oriented training. In this article, we challenge this existing notion in the literature and propose an online kernel adaptive filtering-based approach to predicting stock prices. We experiment with ten different KAF algorithms to analyze stocks’ performance and show the efficacy of the presented work. In addition, and in contrast to the current literature, we look at granular-level data: the experiments are performed with quotes gathered at windows of one minute, five minutes, ten minutes, fifteen minutes, twenty minutes, thirty minutes, one hour, and one day, which are among the windows frequently used by traders. The proposed framework is tested on the 50 stocks making up the Indian stock index Nifty-50. The experimental results show that online learning with KAF is not only a good option but can, practically speaking, be deployed in high-frequency trading as well.
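As a hedged sketch of the kind of online learner studied here, kernel least-mean-squares (KLMS), one of the standard KAF algorithms, updates its prediction one sample at a time with no batch training; the class below is illustrative, not the authors' implementation:

```python
import math

class KLMS:
    """Kernel least-mean-squares: an online, nonlinear adaptive filter.
    Each observed sample adds one Gaussian-kernel centre to a growing
    dictionary; there is no separate batch training phase."""

    def __init__(self, step=0.5, gamma=0.1):
        self.step, self.gamma = step, gamma
        self.centres, self.coeffs = [], []

    def _kernel(self, x, c):
        return math.exp(-self.gamma * sum((a - b) ** 2 for a, b in zip(x, c)))

    def predict(self, x):
        return sum(a * self._kernel(x, c)
                   for a, c in zip(self.coeffs, self.centres))

    def update(self, x, y):
        e = y - self.predict(x)        # prediction error on the new sample
        self.centres.append(list(x))   # grow the dictionary by one centre
        self.coeffs.append(self.step * e)
        return e
```

In a trading setting, `x` would be a feature vector built from recent quotes at the chosen time window and `y` the next price, with `update` called as each quote arrives.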


2021 ◽  
Author(s):  
Wei Zhang ◽  
Zhen He ◽  
Di Wang

Abstract Distribution regression is the regression setting in which the input objects are distributions. Many machine learning problems can be analysed in this framework, such as multi-instance learning and learning from noisy data. This paper builds a conformal predictive system (CPS) for distribution regression, where the system's prediction for a test input is a cumulative distribution function (CDF) of the corresponding test label. The CDF output by a CPS provides useful information about the test label: it can estimate the probability of any event related to the label and can be transformed into prediction intervals and point predictions with the help of the corresponding quantiles. Furthermore, a CPS has the property of validity, as the predicted CDFs and prediction intervals are statistically compatible with the realizations. This property is desirable for many risk-sensitive applications, such as weather forecasting. To the best of our knowledge, this is the first work to extend the CPS learning framework to distribution regression problems. We first embed the input distributions into a reproducing kernel Hilbert space using kernel mean embedding approximated by random Fourier features, and then build a fast CPS on top of the embeddings. While inheriting the validity property of the CPS framework, our algorithm is simple, easy to implement, and fast. The proposed approach is tested on synthetic data sets and can be used to tackle the problem of statistical postprocessing of ensemble forecasts, which demonstrates its effectiveness for distribution regression problems.
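The embedding step can be sketched as follows; the feature count, bandwidth, and sample sizes are illustrative assumptions, not the paper's settings. Random Fourier features approximate a Gaussian kernel, and averaging the feature vectors of a sample approximates the kernel mean embedding of its distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 200                                 # number of random Fourier features (illustrative)
omegas = rng.normal(0.0, 1.0, (D, 1))   # frequencies for a Gaussian kernel, bandwidth 1
phases = rng.uniform(0.0, 2 * np.pi, D)

def mean_embedding(sample):
    """Approximate kernel mean embedding of a distribution:
    the average random-Fourier-feature vector of a sample drawn from it."""
    feats = np.sqrt(2.0 / D) * np.cos(sample @ omegas.T + phases)
    return feats.mean(axis=0)

# Two samples from N(0,1) embed close together; a sample from N(3,1) lands farther away.
emb_a = mean_embedding(rng.normal(0, 1, (500, 1)))
emb_b = mean_embedding(rng.normal(0, 1, (500, 1)))
emb_c = mean_embedding(rng.normal(3, 1, (500, 1)))
```

The finite-dimensional embeddings can then serve as ordinary regression inputs, which is what makes building a fast CPS on top of them straightforward.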

