recursive algorithm
Recently Published Documents

TOTAL DOCUMENTS: 935 (five years: 140)
H-INDEX: 38 (five years: 4)

Robotics, 2022, Vol. 11 (1), pp. 15
Author(s): Fernando Gonçalves, Tiago Ribeiro, António Fernando Ribeiro, Gil Lopes, Paulo Flores

Forward kinematics is one of the main research fields in robotics, where the goal is to obtain the position of a robot’s end-effector from its joint parameters. This work presents a method for achieving this using a recursive algorithm that builds a 3D computational model from the configuration of a robotic system. The orientation of the robot’s links is determined from the joint angles using Euler angles and rotation matrices. Kinematic links are modeled sequentially: the properties of each link are defined by its geometry, the geometry of its predecessor in the kinematic chain, and the configuration of the joint between them. This makes the method well suited to serial kinematic chains. The proposed method is advantageous due to its theoretical increase in computational efficiency, ease of implementation, and the simple interpretation of its geometric operations. The method is tested and validated by modeling a human-inspired robotic mobile manipulator (CHARMIE) in Python.
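A minimal Python sketch of the recursive idea described above, since the paper's own implementation is in Python. The chain description (Z-Y-X Euler joint angles plus a link length along the local x-axis) and all names are illustrative assumptions, not the CHARMIE model.

```python
import numpy as np

def rot_z(a):  # elementary rotation matrices used to compose the Euler angles
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def forward_kinematics(links, base_pos=np.zeros(3), base_rot=np.eye(3)):
    """Recursively place each link after its predecessor in the chain.

    `links` is a list of ((az, ay, ax), length) pairs: the joint's Z-Y-X Euler
    angles relative to the previous link and the link length along its local
    x-axis.  Returns the end-effector position.
    """
    if not links:                                         # base case: end of the chain
        return base_pos
    (az, ay, ax), length = links[0]
    rot = base_rot @ rot_z(az) @ rot_y(ay) @ rot_x(ax)    # orientation of this link
    tip = base_pos + rot @ np.array([length, 0.0, 0.0])   # position of the link's far end
    return forward_kinematics(links[1:], tip, rot)        # recurse on the remaining links

# Example: planar two-link arm, each joint rotated 30 degrees about z.
arm = [((np.pi / 6, 0, 0), 1.0), ((np.pi / 6, 0, 0), 1.0)]
print(forward_kinematics(arm))
```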


2022, Volume 18, Issue 1
Author(s): Karoliina Lehtinen, Paweł Parys, Sven Schewe, Dominik Wojtczak

Zielonka's classic recursive algorithm for solving parity games is perhaps the simplest among the many existing parity game algorithms. However, its complexity is exponential, whereas the current state-of-the-art algorithms run in quasipolynomial time. Here, we present a modification of Zielonka's classic algorithm that brings its complexity down to $n^{O\left(\log\left(1+\frac{d}{\log n}\right)\right)}$, for parity games of size $n$ with $d$ priorities, in line with previous quasipolynomial-time solutions.
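For context on what the paper modifies, here is a compact Python sketch of Zielonka's classic recursion itself (not the quasipolynomial modification). The game encoding, dicts `owner` and `priority` plus an adjacency map `edges`, is an assumption for illustration, and the sketch glosses over dead ends that may appear in subgames.

```python
def attractor(vertices, edges, owner, player, target):
    """Vertices from which `player` can force the play into `target` within `vertices`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in vertices - attr:
            succ = [w for w in edges[v] if w in vertices]
            if (owner[v] == player and any(w in attr for w in succ)) or \
               (owner[v] != player and succ and all(w in attr for w in succ)):
                attr.add(v)
                changed = True
    return attr

def zielonka(vertices, edges, owner, priority):
    """Return (W0, W1), the winning regions of players 0 and 1 on the subgame `vertices`."""
    if not vertices:
        return set(), set()
    d = max(priority[v] for v in vertices)
    p = d % 2                                          # player who benefits from the top priority
    top = {v for v in vertices if priority[v] == d}
    a = attractor(vertices, edges, owner, p, top)
    w0, w1 = zielonka(vertices - a, edges, owner, priority)
    win_opponent = w1 if p == 0 else w0
    if not win_opponent:                               # opponent wins nothing: p wins everywhere
        return (set(vertices), set()) if p == 0 else (set(), set(vertices))
    b = attractor(vertices, edges, owner, 1 - p, win_opponent)
    w0, w1 = zielonka(vertices - b, edges, owner, priority)
    return (w0, w1 | b) if p == 0 else (w0 | b, w1)

# Toy game: vertex -> owner, priority, successors.
owner    = {"a": 0, "b": 1, "c": 0}
priority = {"a": 2, "b": 1, "c": 0}
edges    = {"a": ["b"], "b": ["a", "c"], "c": ["c"]}
print(zielonka(set(owner), edges, owner, priority))    # player 0 wins every vertex here
```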


2022, Vol. 4 (1)
Author(s): Rafika Husnia Munfa'ati, Sugi Guritman, Bib Paruhum Silalahi

Information data protection is necessary to ward off and overcome the various fraud attacks that may be encountered. A secret sharing scheme, a cryptographic method in which a group of trusted parties jointly maintains the security of confidential data, is one answer. In this paper, we apply a recursive algorithm to a Shamir-based linear scheme as the primary method. The algorithm is integrated from the beginning of the share distribution stage through the secret reconstruction stage, relying on a detection parameter to ensure that the recovered secret value is valid. Although the resulting scheme is much simpler because it exploits the Vandermonde matrix structure, its security is not reduced. Indeed, it is supported by two detection parameters, formulated from a recursive algorithm, to detect cheating and identify the cheater(s). Therefore, the scheme is guaranteed to be unconditionally secure and highly time efficient (polynomial running time).
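For context, a minimal Python sketch of the Shamir threshold scheme that underlies the linear scheme discussed here: a secret is split with a random polynomial over a prime field and reconstructed by Lagrange interpolation. The paper's recursive detection parameters and cheater identification are not reproduced, and the prime modulus and variable names are assumptions.

```python
import random

P = 2**127 - 1          # a prime modulus; any prime larger than the secret works

def make_shares(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P   # Fermat inverse of den
    return secret

shares = make_shares(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```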


2021, Vol. 958 (1), pp. 012025
Author(s): R Tawegoum

Abstract
Predicting hourly potential evapotranspiration is particularly important in constrained horticultural nurseries. This paper presents a three-step-ahead predictor of potential evapotranspiration for horticultural nurseries under unsettled weather conditions or climate sensor failure. A Seasonal AutoRegressive Integrated Moving Average (SARIMA) model based on climate data was used to derive a predictor from data generated according to prior knowledge of the system behavior; the aim of the predictor was to compensate for missing data, which are usually not handled by standard forecasting approaches. The generated data also offer the opportunity to capture variations of the model parameters due to abrupt changes in local climate conditions. A recursive algorithm was used to estimate parameter variation, and a Kalman filter to model the state of the system. Simulations for steady-state and unsettled weather conditions showed that the predictor forecasts potential evapotranspiration more accurately than the standard approach. These results are encouraging within the context of predictive irrigation scheduling in nurseries.
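A hedged sketch of the forecasting step only, using statsmodels' SARIMAX on a synthetic hourly series; the model orders, the 24-hour seasonal period, and the data are placeholders rather than those identified in the paper, and the recursive parameter estimation with the Kalman filter is not reproduced.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hourly potential evapotranspiration; a synthetic series stands in for the nursery data.
hours = pd.date_range("2021-06-01", periods=24 * 30, freq="H")
etp = pd.Series(np.clip(0.2 + 0.15 * np.sin(2 * np.pi * hours.hour / 24)
                        + np.random.default_rng(0).normal(0, 0.02, len(hours)), 0, None),
                index=hours)

# Placeholder orders with a daily (24-hour) seasonality; the paper's orders may differ.
model = SARIMAX(etp, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
result = model.fit(disp=False)

# Three-step-ahead (three hours ahead) forecast of potential evapotranspiration.
print(result.forecast(steps=3))
```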


2021, Vol. 2131 (3), pp. 032012
Author(s): V P Bubnov, Sh Kh Sultonov

Abstract
The paper considers a new approach to building models of nonstationary service systems, based on the formation of all possible states of a nonstationary service system with a finite number of requests and the rules of transition between them, the formation of the coefficient matrix of the Chapman-Kolmogorov system of differential equations, and a numbering procedure for all states. A critical analysis is made of the algorithms for forming the coefficient matrix and numbering the states: sequential, recursive, and recursive with grouping. The sequential algorithm is compared with the recursive algorithm, and the optimal structure for storing the list of states for the sequential algorithm is given. Recommendations for the practical application of software implementations of the considered algorithms are discussed. Theoretical foundations for building and calculating models of nonstationary service systems are developed.
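To make the role of the coefficient matrix concrete, here is a minimal sketch that forms the coefficient (generator) matrix of a small nonstationary birth-death service system and integrates the Chapman-Kolmogorov equations numerically. The trivial sequential state numbering, the rates, and the system size are illustrative assumptions, not the algorithms compared in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Nonstationary single-server system holding at most N requests.
N = 5
mu = 1.0

def lam(t):
    """Time-varying arrival rate (illustrative)."""
    return 0.8 + 0.4 * np.sin(t)

def generator(t):
    """Coefficient matrix Q(t) of the Chapman-Kolmogorov system dp/dt = p Q(t)."""
    q = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        if n < N:
            q[n, n + 1] = lam(t)          # arrival: state n -> n + 1
        if n > 0:
            q[n, n - 1] = mu              # service completion: state n -> n - 1
        q[n, n] = -q[n].sum()             # diagonal makes each row sum to zero
    return q

def rhs(t, p):
    return p @ generator(t)

p0 = np.zeros(N + 1)
p0[0] = 1.0                               # start empty with probability 1
sol = solve_ivp(rhs, (0.0, 10.0), p0, dense_output=True)
print(sol.y[:, -1])                       # state probabilities at t = 10
```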


2021, Vol. 5 (Supplement_1), pp. 819-819
Author(s): Natalia Gouskova, Dae Kim, Sandra Shi, Thomas Travison

Abstract
Often it is necessary to evaluate the effectiveness of an intervention on the basis of multiple event outcomes of variable benefit and harm, which may develop over time. An attractive approach is to order combinations of these events by the desirability of the overall outcome (e.g., from cure without any adverse events to death), and then determine whether the intervention shifts the distribution of these ordered outcomes towards the more desirable (Evans, Follmann 2016). The win ratio introduced in Pocock et al. 2012 was an earlier implementation of this approach. More recently, Claggett et al. 2015 proposed a more comprehensive method allowing nonparametric and regression-based inference in the presence of competing risks. Key to the method is weighting observations by inverse probability of censoring (IPC) processes specific to participants and event types. The method has seemingly great practical utility, but computation of the weights is a non-trivial challenge with real-life data, when each event can have its own censoring time. We present a novel recursive algorithm solving this problem for an arbitrary number of events ordered by clinical importance or desirability. The algorithm can be implemented in SAS or R software, and computes IPC weights as well as nonparametric or parametric estimates and resampling-based measures of uncertainty. We illustrate the approach using data from the SPRINT trial of an antihypertensive intervention, comparing risk-benefit profiles for robust, pre-frail, and frail subpopulations, and in an analysis of falls as a function of progressive risk factors. More general use of the software tools deploying the method is described.
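The recursive weight computation is specific to the abstract's method and is not reproduced here, but its basic building block, an inverse-probability-of-censoring weight obtained from a Kaplan-Meier estimate of the censoring distribution, can be sketched as follows. The authors target SAS or R; this Python sketch, with assumed variable names and toy data, illustrates the IPC weight for a single event type only.

```python
import numpy as np

def censoring_survival(time, event):
    """Kaplan-Meier estimate of the censoring survival function G(t).

    `event` is 1 if the outcome occurred and 0 if the observation was censored;
    censorings are treated as the 'events' of interest here (distinct times assumed).
    """
    order = np.argsort(time)
    time, event = time[order], event[order]
    n = len(time)
    g = 1.0
    curve = []
    for i, (t, e) in enumerate(zip(time, event)):
        at_risk = n - i
        if e == 0:                        # a censoring at time t
            g *= 1.0 - 1.0 / at_risk
        curve.append((t, g))
    return curve

def ipc_weight(curve, t):
    """Weight 1 / G(t-) for an observation evaluated at time t."""
    g = 1.0
    for s, value in curve:
        if s < t:
            g = value
        else:
            break
    return 1.0 / g if g > 0 else np.inf

# Toy data: follow-up times and indicator of observed (1) vs censored (0) outcome.
times = np.array([2.0, 3.5, 4.0, 6.0, 7.5])
events = np.array([1, 0, 1, 0, 1])
curve = censoring_survival(times, events)
print([round(ipc_weight(curve, t), 3) for t in times])
```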


2021
Author(s): Sergio I. Hernandez

Tracking multiple objects is a challenging problem for an automated system, with applications in many domains. Typically the system must be able to represent the posterior distribution of the state of the targets, using a recursive algorithm that takes information from noisy measurements. However, in many important cases the number of targets is also unknown and has to be estimated from data. The Probability Hypothesis Density (PHD) filter is an effective approach to this problem. The method uses a first-order moment approximation to develop a recursive algorithm for the optimal Bayesian filter. The PHD recursion can be implemented in closed form in some restricted cases, and more generally using Sequential Monte Carlo (SMC) methods. The assumptions made in the PHD filter are appealing for computational reasons in real-time tracking implementations, but they are only justifiable when the signal-to-noise ratio (SNR) of a single target is high enough to remediate the loss of information from the approximation. Although the original derivation of the PHD filter is based on functional expansions of belief-mass functions, it can also be developed by exploiting elementary constructions of Poisson processes. This thesis presents novel strategies for improving the Sequential Monte Carlo implementation of the PHD filter using the point process approach. Firstly, we propose a post-processing state estimation step for the PHD filter, using Markov Chain Monte Carlo methods for mixture models. Secondly, we develop recursive Bayesian smoothing algorithms using the approximations of the filter backwards in time. The purpose of both strategies is to overcome the problems arising from the PHD filter assumptions. As a motivating example, we analyze the performance of the methods for the difficult problem of person tracking in crowded environments.


Complexity, 2021, Vol. 2021, pp. 1-12
Author(s): Zhanjiang Li, Lin Guo

As an important part of the national economy, small enterprises are facing the problem of financing difficulties, so a scientific and reasonable credit rating method for small enterprises is very important. This paper proposes a credit rating model for small enterprises based on optimal discriminant ability: credit ratings are divided so that the credit score gap between enterprises within the same rating is the smallest and the credit score gap between enterprises in different ratings is the largest. Based on this principle, a nonlinear optimization model for the credit rating division of small enterprises is built, and an approximate solution is obtained by a recursive algorithm with strong reproducibility and a clear structure. The resulting division satisfies not only the principle that the higher the credit grade, the lower the default loss rate, but also the principle that the credit group of small enterprises matches the credit grade; credit data on 3111 small enterprises from a commercial bank are used for empirical analysis. The innovation of this study is to take the maximum ratio of the sum of the dispersions of credit scores between different ratings to the sum of the dispersions of credit scores within the same rating as the objective function, and to require the default loss rate of each lower credit grade to be strictly larger than that of the grade above it as an inequality constraint, thereby constructing a nonlinear optimal partition model for credit rating. This ensures that enterprises with a small credit score gap receive the same grade while those with a large gap receive different grades, overcoming the disadvantage of existing research that considers only enterprises with large credit score gaps while ignoring those with small gaps. The empirical results show that the credit rating of small enterprises in this study matches not only a reasonable default loss rate but also the credit status of the enterprises. Tests and comparative analysis against existing approaches based on customer number distribution, K-means clustering, and default pyramid division show that the credit rating model in this study is reasonable and that the distribution of credit score intervals is more stable.
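As a rough illustration of the dividing principle only (not the paper's optimization model, its default-loss-rate constraint, or its recursive solution), the sketch below recursively enumerates every way of cutting a sorted list of credit scores into a fixed number of contiguous grades and keeps the partition that maximizes the ratio of between-grade to within-grade score dispersion; all names and the dispersion measure are assumptions, and the exhaustive recursion is only practical for small examples.

```python
import numpy as np

def dispersion_ratio(groups):
    """Between-grade dispersion divided by within-grade dispersion of credit scores."""
    all_scores = np.concatenate(groups)
    overall = all_scores.mean()
    between = sum(len(g) * (g.mean() - overall) ** 2 for g in groups)
    within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return between / within if within > 0 else np.inf

def partitions(scores, k):
    """Recursively enumerate every split of sorted `scores` into k contiguous grades."""
    if k == 1:
        yield [scores]
        return
    for cut in range(1, len(scores) - k + 2):
        for rest in partitions(scores[cut:], k - 1):
            yield [scores[:cut]] + rest

def best_partition(scores, k):
    """Keep the partition with the largest between/within dispersion ratio."""
    return max(partitions(scores, k), key=dispersion_ratio)

scores = np.sort(np.random.default_rng(0).normal(70, 10, size=20))
for i, grade in enumerate(best_partition(scores, 4), 1):
    print(f"grade {i}: {grade.round(1)}")
```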

