Probabilistic Bounds for a Class of Filtering Algorithms in the Scalar Case

Author(s): Shihong Wei, James C. Spall
2015, Vol 3 (3), pp. 30-34
Author(s): B. Anitha, Srinivas Bachu, C. Sailaja, ...
2021, Vol 70, pp. 1-10
Author(s): Domenico Capriglione, Marco Carratu, Marcantonio Catelani, Lorenzo Ciani, Gabriele Patrizi, ...
2020, Vol 23 (3), pp. 723-752
Author(s): Alessio Fiscella, Patrizia Pucci

Abstract: This paper deals with the existence of nontrivial solutions for critical, possibly degenerate, Kirchhoff fractional (p, q) systems. For clarity, the results are first presented in the scalar case and then extended to the vectorial framework. The main features and novelty of the paper are the (p, q) growth of the fractional operator, the double lack of compactness, and the fact that the systems can be degenerate. As far as we know, the results are new even in the scalar case and when the Kirchhoff model considered is non-degenerate.


2021
Author(s): Alina Kloss, Georg Martius, Jeannette Bohg

Abstract: In many robotic applications, it is crucial to maintain a belief about the state of a system, which serves as input for planning and decision making and provides feedback during task execution. Bayesian filtering algorithms address this state estimation problem, but they require models of process dynamics and sensory observations, as well as the respective noise characteristics of these models. Recently, multiple works have demonstrated that these models can be learned by end-to-end training through differentiable versions of recursive filtering algorithms. In this work, we investigate the advantages of differentiable filters (DFs) over both unstructured learning approaches and manually tuned filtering algorithms, and provide practical guidance to researchers interested in applying such differentiable filters. For this, we implement DFs with four different underlying filtering algorithms and compare them in extensive experiments. Specifically, we (i) evaluate different implementation choices and training approaches, (ii) investigate how well complex models of uncertainty can be learned in DFs, (iii) evaluate the effect of end-to-end training through DFs, and (iv) compare the DFs to each other and to unstructured LSTM models.
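The belief-maintenance cycle that the abstract describes can be illustrated, in the scalar case, by the classic Kalman filter: each step predicts the belief forward through a process model and then updates it with a noisy measurement. This is a minimal generic sketch for illustration only, not the authors' implementation; all parameter names (`a`, `q`, `h`, `r`) and values are assumptions chosen for the example.

```python
# Minimal scalar Kalman filter: one predict/update cycle of the
# recursive Bayesian filtering loop, where the belief over the
# latent state is a Gaussian with a mean and a variance.

def kalman_step(mean, var, z, a=1.0, q=0.1, h=1.0, r=0.5):
    """Advance the belief (mean, var) using measurement z.

    a: process model coefficient   q: process noise variance
    h: observation model           r: observation noise variance
    (all values here are illustrative assumptions)
    """
    # Predict: propagate the belief through the process model.
    mean_pred = a * mean
    var_pred = a * a * var + q
    # Update: fuse the prediction with the measurement z.
    k = var_pred * h / (h * h * var_pred + r)  # Kalman gain
    mean_new = mean_pred + k * (z - h * mean_pred)
    var_new = (1.0 - k * h) * var_pred
    return mean_new, var_new

# Track a roughly constant state observed under noise: the mean
# drifts toward the measurements while the variance shrinks.
mean, var = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:
    mean, var = kalman_step(mean, var, z)
```

Differentiable filters keep this same recursive structure but make the models (here the fixed constants `a`, `q`, `h`, `r`) learnable by backpropagating through the predict/update steps.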


2013, Vol 61 (7), pp. 1689-1697
Author(s): Zulfiquar Ali Bhotto, Andreas Antoniou
