numerical function
Recently Published Documents


TOTAL DOCUMENTS: 126 (FIVE YEARS: 28)

H-INDEX: 17 (FIVE YEARS: 4)

Author(s):  
Toni Monleón-Getino ◽  

Survival analysis concerns the analysis of time-to-event data and is essential in fields such as oncology. The survival function, S(t), is usually calculated, but in the presence of competing risks (competing events) other statistical concepts and methods must be introduced, such as the cumulative incidence function, CI(t), defined as the proportion of subjects with an event time less than or equal to t. The present study describes a methodology that numerically obtains the shape of CI(t) curves and estimates the benefit time points (BTP), i.e. the times (t) at which 90, 95 or 99% of the maximum value of CI(t) is reached. Once the numerical function of CI(t) is obtained, it can be projected to infinite time, with all the limitations that this entails. For this task the R function Weibull.cumulative.incidence() is proposed. In a first step, this function transforms the survival function S(t), obtained using the Kaplan–Meier method, into CI(t). In a second step, the best-fit function of CI(t) is calculated in order to estimate the BTP, using one of two procedures: (1) a parametric method, which estimates a four-parameter Weibull growth curve by non-linear regression (nls), or (2) a non-parametric method, using Local Polynomial Regression (LPR, or LOESS) fitting. Two examples are developed with the Weibull.cumulative.incidence() function to illustrate the method. The methodology will be useful for better tracking of the evolution of diseases (especially in the presence of competing risks) and for projecting time to infinity, and it may help identify the causes of current trends in diseases such as cancer. We think that BTP can be important in major diseases such as cardiac illness or cancer for finding the inflection point of the disease, associating treatments, or assessing the course of the disease and changing treatments at those points. These points may furthermore be important for medical decision-making.
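
An illustrative Python sketch of the two steps (the study's own implementation is the R function Weibull.cumulative.incidence(); the Weibull parameterization, the placeholder Kaplan–Meier values, and the 95% level below are assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_growth(t, a, b, c, d):
    # Four-parameter Weibull growth curve: lower asymptote a, upper asymptote b,
    # scale c, shape d (an assumed parameterization; the paper's may differ).
    return a + (b - a) * (1.0 - np.exp(-(t / c) ** d))

def benefit_time_point(t_grid, params, level=0.95):
    # BTP: earliest time at which CI(t) reaches `level` of its maximum on the grid.
    ci = weibull_growth(t_grid, *params)
    return t_grid[np.argmax(ci >= level * ci.max())]

# Step 1: transform a Kaplan-Meier survival estimate into CI(t) = 1 - S(t).
# km_times / km_survival stand in for an estimate produced elsewhere.
km_times = np.linspace(0.1, 60.0, 120)           # follow-up times (months)
km_survival = np.exp(-(km_times / 40.0) ** 1.5)  # placeholder S(t)
ci_obs = 1.0 - km_survival

# Step 2: fit the growth curve by non-linear least squares and locate the BTP,
# projecting the fitted curve beyond the observed follow-up.
params, _ = curve_fit(weibull_growth, km_times, ci_obs,
                      p0=[0.0, 1.0, 40.0, 1.5], maxfev=10000)
print("BTP(95%):", benefit_time_point(np.linspace(0.1, 300.0, 3000), params))
```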


Author(s):  
Mohammad Khajehzadeh ◽  
Alireza Sobhani ◽  
Seyed Mehdi Seyed Alizadeh ◽  
Mahdiyeh Eslami

This study introduces an effective hybrid optimization algorithm, the Particle Swarm Sine Cosine Algorithm (PSSCA), for numerical function optimization and for automating the optimum design of retaining structures under seismic loads. The new algorithm employs the dynamic behavior of sine and cosine functions in the velocity-updating operation of particle swarm optimization (PSO) to achieve faster convergence and better accuracy of the final solution without getting trapped in local minima. The proposed algorithm is tested on a set of 16 benchmark functions, and the results are compared with other well-known optimization algorithms. For the seismic optimization of retaining structures, the Mononobe-Okabe method is employed for the dynamic loading condition, and the total construction cost of the structure is taken as the objective function. Finally, the optimization of two retaining structures from the literature is considered under static and seismic loading. As the results demonstrate, PSSCA is superior, generating better optimal solutions than the competing algorithms.
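
One plausible reading of such a sine-cosine velocity update, sketched in Python; the exact PSSCA update rule is not reproduced here, so the coefficients and the placement of the sin/cos terms are assumptions:

```python
import numpy as np

# Sketch: SCA-style sin/cos modulation inside the classic PSO velocity update.
def pssca_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    theta = rng.uniform(0.0, 2.0 * np.pi, x.shape)
    # |sin| and |cos| near 1 favor exploration; near 0 they favor exploitation.
    return (w * v
            + c1 * r1 * np.sin(theta) * (pbest - x)
            + c2 * r2 * np.cos(theta) * (gbest - x))
```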


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5055
Author(s):  
Zeyang Wen ◽  
Gongliu Yang ◽  
Qingzhong Cai

In high-accuracy strapdown inertial navigation systems (SINS), inertial measurement unit (IMU) biases can severely affect navigation accuracy. Traditionally, a Kalman filter (KF) is used to estimate those biases. However, the KF is an unbiased estimation method based on the assumption of Gaussian white noise (GWN), whereas IMU sensor noise is irregular, and Kalman filtering is no longer accurate in that case. To obtain the optimal solution for the IMU biases, this paper proposes a novel calibration method that uses a KF-based AdaGrad algorithm. Three improvements are made: (1) the adaptive subgradient method (AdaGrad) is introduced to overcome the difficulty of setting the step size; (2) a KF-based AdaGrad numerical function is derived; and (3) a KF-based AdaGrad calibration algorithm is proposed. Experimental results show that the proposed method effectively improves the accuracy of the IMU biases in both static tests and car-mounted field tests.
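
For reference, a minimal AdaGrad loop in Python; in the paper this step operates on a KF-based cost, which is replaced here by a toy quadratic, so the gradient function and bias values are purely illustrative:

```python
import numpy as np

# Generic AdaGrad loop: per-coordinate step sizes shrink automatically as
# squared gradients accumulate, avoiding hand-tuned step-size schedules.
def adagrad(grad_fn, theta, lr=0.1, eps=1e-8, iters=500):
    g_accum = np.zeros_like(theta)
    for _ in range(iters):
        g = grad_fn(theta)
        g_accum += g * g                                   # accumulated squared gradients
        theta = theta - lr * g / (np.sqrt(g_accum) + eps)  # adaptive per-coordinate step
    return theta

true_bias = np.array([0.02, -0.01, 0.005])   # illustrative sensor biases
estimate = adagrad(lambda b: 2.0 * (b - true_bias), np.zeros(3))
print(estimate)  # converges toward true_bias without hand-tuned step sizes
```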


2021 ◽  
Vol 13 (14) ◽  
pp. 2829
Author(s):  
Carlos Cabezas-Rabadán ◽  
Josep E. Pardo-Pascual ◽  
Jesus Palomar-Vázquez

Sediment grain size is a fundamental parameter conditioning beach-face morphology and shoreline changes. From remote sensing data, an efficient definition of the shoreline position as the water–land interface makes it possible to study the geomorphological characteristics of beaches. In this work, shoreline variability is defined by extracting a set of Satellite Derived Shorelines (SDS) covering about three and a half years. The SDS are derived from Sentinel-2 imagery with high accuracy (about 3 m RMSE) using SHOREX. The variability is related to a large dataset of grain-size samples from the micro-tidal beaches of the Gulf of Valencia (Western Mediterranean). The two parameters present an inverse, non-linear relationship, probably controlled by the beach-face slope: high shoreline variability is associated with fine sands, followed by a rapid decrease (with a shifting point at about medium/coarse sand) and subsequent small decreases as grain size increases. The relationship between the two parameters is accurately described by a numerical function (R2 about 0.70) when considering samples from 137 open beaches. The variability is defined using different proxies, coastal segment lengths, and numbers of SDS under diverse oceanographic conditions, making it possible to examine their effect on the relation with sediment size. The relationship explored in this work improves the understanding of the mutual connection between sediment size, beach-face slope, and shoreline variability, and it may set up the basis for a rough estimation of sediment grain size from satellite optical imagery.


2021 ◽  
Vol 3 (4 (111)) ◽  
pp. 6-13
Author(s):  
Igor Kulyk ◽  
Olga Berezhna ◽  
Anatoliy Novhorodtsev ◽  
Maryna Shevchenko

The application of data compression methods is an effective means of improving the performance of information systems. Of particular interest are lossless compression methods, which are distinguished by their versatility, low implementation costs, and capacity for self-control. In this regard, the application of binomial numbering systems is promising. The numerical function of the binomial numbering system is used for compression: it puts sequences in one-to-one correspondence with their numbers, with the transition from binary combinations to binomial numbers used as an intermediate stage. During the study, theorems were formulated that establish the properties of the compressing and restoring mappings and the ways of implementing them. Models of the compression processes were derived from the numerical function, both for compressible equilibrium (constant-weight) combinations and for sequences of a general form. The compression models include coding steps based on binary binomials. The results show the effectiveness of compression based on the binomial numerical function: the speed of information transmission through a communication channel increased by a factor of 1.02 in the worst case and 18.29 in the best case, depending on the number of ones in 128-bit equilibrium combinations. The proposed methods are advantageous due to their high compression ratio (from 1.01 to 16 times for general 128-bit sequences) and their versatility: combinations are compressed in which the number of ones spans 75% of its total variation range. The developed methods ensure error control during conversion, are undemanding of computational resources, and feature low implementation costs.
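
A Python sketch of one standard realization of such a numerical function: enumerative (ranking) coding that maps each n-bit combination with k ones to a unique number in [0, C(n, k)) and back. The paper's binomial numbers and intermediate coding steps may differ in detail:

```python
from math import comb

def rank(bits):
    # Number of fixed-weight sequences lexicographically smaller than `bits`.
    n, k, r = len(bits), sum(bits), 0
    for i, b in enumerate(bits):
        if b:
            r += comb(n - i - 1, k)  # sequences that place a 0 here instead
            k -= 1
    return r

def unrank(n, k, r):
    # Inverse mapping: recover the weight-k sequence from its number r.
    bits = []
    for i in range(n):
        c = comb(n - i - 1, k)       # sequences that place a 0 at position i
        if k > 0 and r >= c:
            bits.append(1); r -= c; k -= 1
        else:
            bits.append(0)
    return bits

word = [1, 0, 1, 1, 0, 0, 1, 0]
idx = rank(word)                     # compressed representation: (n, k, idx)
assert unrank(len(word), sum(word), idx) == word
```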


Author(s):  
Blaise Ravelo ◽  
Mathieu Guerin ◽  
Wenceslas Rahajandraibe ◽  
Valentin Gies ◽  
Lala Rajaoarisoa ◽  
...  
Keyword(s):  

Author(s):  
Elena Solovyeva ◽  
Ali Abdullah

Introduction: Thanks to advantages such as high flexibility and the ability to move heavy pieces with high torques and forces, the robotic arm, also known as a manipulator, is the most widely used industrial robot. Purpose: We improve the control quality of a manipulator robot with seven degrees of freedom in the V-REP environment using a reinforcement learning method based on deep neural networks. Methods: The policy of the action signal is estimated by building a numerical algorithm using deep neural networks: the actor network sends the action signal to the robotic manipulator, and the critic network performs a numerical function approximation to calculate the value function (Q-value). Results: We created a model of the robot and the environment using the reinforcement learning library in MATLAB and connected the output (action) signals to a simulated robot in V-REP. The robot was trained to reach an object in its workspace by interacting with the environment and computing the reward of each interaction. Observations were modeled with three vision sensors. Based on the proposed deep learning method, an agent representing the robotic manipulator was built using a four-layer neural network for the actor and a four-layer neural network for the critic. The agent was trained for several hours until the robot reached the object in its workspace in an acceptable way. The main advantage over supervised learning control is that the robot can act and train at the same time, giving it the ability to reach an object in its workspace in a continuous action space. Practical relevance: The results obtained are used to control the movement of the manipulator without the need to construct kinematic models, which reduces the mathematical complexity of the calculation and provides a universal solution.
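
An illustrative actor-critic pair in Python/PyTorch, loosely mirroring the four-layer networks described; the study used MATLAB's reinforcement learning library, so the framework, hidden sizes, and observation dimension here are assumptions:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    # Maps an observation to a continuous action (one command per joint).
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh())  # bounded joint commands

    def forward(self, obs):
        return self.net(obs)

class Critic(nn.Module):
    # Approximates the value function Q(observation, action).
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

actor = Actor(obs_dim=12, act_dim=7)    # 7 degrees of freedom; obs_dim assumed
critic = Critic(obs_dim=12, act_dim=7)
```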


2020 ◽  
Vol 39 (3) ◽  
pp. 3275-3295
Author(s):  
Yin Tianhe ◽  
Mohammad Reza Mahmoudi ◽  
Sultan Noman Qasem ◽  
Bui Anh Tuan ◽  
Kim-Hung Pho

A lot of research has been directed toward new optimizers, known as heuristic black-box optimizers, that can find a suboptimal solution for any optimization problem. They can find suboptimal solutions much faster than mathematical programming methods (if the latter find them at all). Particle swarm optimization (PSO) is an example of this type. In this paper, a new modified PSO is proposed that incorporates conditional learning behavior among birds into the PSO algorithm: the particles, little by little, learn how they should behave in similar conditions. The proposed method is named Conditionalized Particle Swarm Optimization (CoPSO). In CoPSO, the problem space is first divided into a set of subspaces. Any particle inside a subspace is then inclined toward its best experienced location if the particles in its subspace have low diversity; otherwise, it is inclined toward the global best location. The particles also learn to speed up in non-valuable subspaces and to slow down in valuable ones. The performance of CoPSO is compared with state-of-the-art methods on a set of standard benchmark functions.
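
A hedged Python sketch of CoPSO's conditional pull; the diversity measure and the threshold below are assumptions, not the paper's exact definitions:

```python
import numpy as np

# If the particles currently in a subspace have low diversity, a particle is
# drawn toward its own best location; otherwise toward the global best.
def conditional_target(subspace_positions, pbest_i, gbest, threshold=0.1):
    centroid = subspace_positions.mean(axis=0)
    # Mean distance to the centroid as a simple diversity proxy.
    diversity = np.mean(np.linalg.norm(subspace_positions - centroid, axis=1))
    return pbest_i if diversity < threshold else gbest
```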


2020 ◽  
Vol 10 (86) ◽  
Author(s):  
Nataliia Dovha ◽  
◽  
Hryhorii Tsehelyk ◽  

The processes of optimizing a production plan according to given criteria were investigated. One of the difficulties is coordinating the criteria and taking their effects on the optimal production plan into account. In practice, every company faces decisions that are quite complex and significantly affect the result. The best solution is usually chosen by means of a single numerical function, the optimality criterion: the best solution is the one that maximizes (or minimizes) the selected criterion. For the most part, however, the quality of decisions is characterized not by one but by many incommensurable criteria, so decisions must be made on the basis of many criteria at once. That is why the investigation and implementation of multicriteria models is an important stage in the development of modern science. The current rate of change in production is very high; to meet new needs and remain competitive, each enterprise, firm, or company must be able to make fast and correct decisions. A properly formed production program allows a company to meet consumer demand for its products with the best use of resources and to obtain the maximum profit. Quite often mathematical methods are needed to study this problem, and the results obtained by solving the mathematical problem make it possible to give optimal recommendations for action. The main purpose of a company is usually to make a profit, and one of the factors on which profit depends is the cost price. In view of this, an optimization model of the production-plan problem was proposed, with the maximum price of the manufactured products and the minimum production costs taken as criteria. Since it is impossible to ensure the maximum price and the minimum production costs simultaneously, the solution was achieved by solving the proposed mathematical model step by step, using the idea of the method of successive concessions, which yields a certain price at low cost. An example shows an algorithm for solving this problem.
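
A toy Python sketch of the successive-concessions idea on a two-product plan (all coefficients and the 5% concession are illustrative assumptions; the article's model is more detailed):

```python
import numpy as np
from scipy.optimize import linprog

price = np.array([5.0, 8.0])    # unit prices of two products (illustrative)
cost = np.array([3.0, 6.0])     # unit production costs (illustrative)
A_ub = np.array([[1.0, 2.0]])   # one shared resource constraint
b_ub = np.array([100.0])

# Step 1: maximize revenue (linprog minimizes, so negate the objective).
r1 = linprog(-price, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
best_revenue = -r1.fun

# Step 2: concede a fraction delta of the best revenue, then minimize cost
# subject to revenue >= (1 - delta) * best_revenue.
delta = 0.05
A2 = np.vstack([A_ub, -price])  # -price @ x <= -(1 - delta) * best_revenue
b2 = np.append(b_ub, -(1 - delta) * best_revenue)
r2 = linprog(cost, A_ub=A2, b_ub=b2, bounds=[(0, None)] * 2)
print("plan:", r2.x, "revenue:", price @ r2.x, "cost:", cost @ r2.x)
```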

