Bayesian Derivative Order Estimation for a Fractional Logistic Model

Mathematics ◽  
2020 ◽  
Vol 8 (1) ◽  
pp. 109
Author(s):  
Francisco J. Ariza-Hernandez ◽  
Martin P. Arciga-Alejandre ◽  
Jorge Sanchez-Ortiz ◽  
Alberto Fleitas-Imbert

In this paper, we consider the inverse problem of derivative order estimation in a fractional logistic model. To solve the direct problem, we use the Grünwald-Letnikov fractional derivative; the inverse problem is then tackled from a Bayesian perspective. To construct the likelihood function, we propose an explicit numerical scheme based on the truncated series of the derivative definition. Using MCMC samples from the marginal posterior distributions, we estimate the order of the derivative and the growth rate parameter of the dynamic model, as well as the noise in the observations. To evaluate the methodology, a simulation study was performed with synthetic data, in which the bias and mean squared error were calculated; the results give evidence of the effectiveness of the method and the suitable performance of the proposed model. Moreover, an example with real data is presented as evidence of the relevance of using a fractional model.
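As a rough illustration of the direct-problem solver, the sketch below implements an explicit Grünwald-Letnikov scheme for D^α y = r y(1 − y); the coefficient recurrence and the step update are standard, but the paper's exact truncation and parameterisation are not reproduced here.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov coefficients w_j = (-1)^j C(alpha, j), via the usual recurrence."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def fractional_logistic(alpha, r, y0, h, n_steps):
    """Explicit GL scheme for D^alpha y = r * y * (1 - y) on a uniform grid of step h."""
    w = gl_weights(alpha, n_steps)
    y = np.empty(n_steps + 1)
    y[0] = y0
    for k in range(1, n_steps + 1):
        memory = np.dot(w[1:k + 1], y[k - 1::-1])   # sum_{j=1}^{k} w_j y_{k-j}
        y[k] = h**alpha * r * y[k - 1] * (1.0 - y[k - 1]) - memory
    return y

# Example: alpha = 0.9, r = 1.2, y(0) = 0.1 on [0, 10]
t = np.linspace(0.0, 10.0, 501)
y = fractional_logistic(alpha=0.9, r=1.2, y0=0.1, h=t[1] - t[0], n_steps=len(t) - 1)
```

With alpha = 1 the weights reduce to those of a first-order difference and the scheme recovers the classical logistic equation, which is a convenient sanity check.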

2020 ◽  
Vol 39 (5) ◽  
pp. 6891-6901
Author(s):  
Godrick Oketch ◽  
Filiz Karaman

Count data models are based on definite counts of events as dependent variables. But there are practical situations in which these counts may fail to be specific and are seen as imprecise. In this paper, an assumption that heaped data points are fuzzy is used as a way of identifying counts that are not definite since heaping can result from imprecisely reported counts. Because it is practically unlikely to report all counts in an entire dataset as imprecise, this paper proposes a likelihood function that not only considers both precise and imprecisely reported counts but also incorporates α - cuts of fuzzy numbers with the aim of varying impreciseness of fuzzy reported counts. The proposed model is then illustrated through a smoking cessation study data that attempts to identify factors associated with the number of cigarettes smoked in a month. Through the real data illustration and a simulation study, it is shown that the proposed model performs better in predicting the outcome counts especially when the imprecision of the fuzzy points in a dataset are increased. The results also show that inclusion of α - cuts makes it possible to identify better models, a feature that was not previously possible.
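One way to picture such a mixed likelihood is sketched below, assuming a Poisson regression in which heaped counts are treated as symmetric triangular fuzzy numbers whose α-cut interval shrinks to the reported value as α → 1; the spread parameter and the triangular shape are assumptions for illustration, not the paper's exact specification.

```python
import numpy as np
from scipy.stats import poisson

def alpha_cut(center, spread, alpha_level):
    """alpha-cut of a symmetric triangular fuzzy count: an integer interval
    that collapses to the reported center as alpha_level -> 1."""
    half = spread * (1.0 - alpha_level)
    lo = max(0, int(np.floor(center - half)))
    hi = int(np.ceil(center + half))
    return lo, hi

def log_likelihood(beta, X, y, fuzzy_mask, spread, alpha_level):
    """Mixed log-likelihood: exact Poisson pmf for precise counts, summed pmf
    over the alpha-cut interval for fuzzy (heaped) counts."""
    mu = np.exp(X @ beta)
    ll = 0.0
    for i in range(len(y)):
        if fuzzy_mask[i]:
            lo, hi = alpha_cut(y[i], spread, alpha_level)
            ll += np.log(poisson.pmf(np.arange(lo, hi + 1), mu[i]).sum() + 1e-300)
        else:
            ll += poisson.logpmf(y[i], mu[i])
    return ll

# Fitting would maximise this, e.g.:
# from scipy.optimize import minimize
# fit = minimize(lambda b: -log_likelihood(b, X, y, fuzzy_mask, 2.0, 0.5),
#                x0=np.zeros(X.shape[1]))
```

Varying `alpha_level` widens or narrows the interval over which the pmf is summed, which is how the α-cuts modulate the assumed impreciseness of the heaped counts.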


Geosciences ◽  
2018 ◽  
Vol 8 (12) ◽  
pp. 497
Author(s):  
Fedor Krasnov ◽  
Alexander Butorin

Sparse spike deconvolution is one of the oldest inverse problems and a stylized version of recovery in seismic imaging. The goal of sparse spike deconvolution is to recover an approximation of a given noisy measurement T = W ∗ r + W₀. Since the convolution destroys many low and high frequencies, prior information is required to regularize the inverse problem. In this paper, the authors continue to study the problem of searching for positions and amplitudes of the reflection coefficients of the medium (SP&ARCM). In previous research, the authors proposed a practical algorithm, named A₀, for solving the inverse problem of obtaining geological information from the seismic trace. In the current paper, the authors improved the A₀ algorithm and applied it to real (non-synthetic) data. Firstly, the authors considered the matrix approach and the Differential Evolution approach to the SP&ARCM problem and showed that their efficiency is limited in this setting. Secondly, the authors showed that the way to improve A₀ lies in optimization with sequential regularization. The authors presented accuracy calculations for A₀ in this case and experimental convergence results. The authors also considered different initialization parameters of the optimization process with a view to accelerating convergence. Finally, the authors successfully validated the A₀ algorithm on synthetic and real data. Further practical development of A₀ will be aimed at increasing the robustness of its operation, as well as at applying it to more complex models of real seismic data. The practical value of the research is to increase the resolving power of the wave field by reducing the contribution of interference, which provides new information for seismic-geological modeling.
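For orientation, the sketch below implements a generic ℓ1-regularized sparse-spike deconvolution via ISTA; it is a standard baseline for this problem, not the authors' A₀ algorithm, and the Ricker wavelet is an assumed source signature.

```python
import numpy as np

def ricker(f, dt, length):
    """Ricker wavelet of peak frequency f (Hz), sampled at interval dt (s)."""
    t = np.arange(-length // 2, length // 2 + 1) * dt
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def ista_deconv(trace, wavelet, lam=0.05, n_iter=500):
    """ISTA for min_r 0.5 * ||W r - T||^2 + lam * ||r||_1,
    where W is the convolution matrix built from the wavelet."""
    n = len(trace)
    I = np.eye(n)
    W = np.stack([np.convolve(I[k], wavelet, mode="same") for k in range(n)], axis=1)
    L = np.linalg.norm(W, 2) ** 2            # Lipschitz constant of the gradient
    r = np.zeros(n)
    for _ in range(n_iter):
        grad = W.T @ (W @ r - trace)
        z = r - grad / L
        r = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return r

# Example: recover two spikes from a noisy synthetic trace
w = ricker(f=30.0, dt=0.002, length=60)
r_true = np.zeros(200); r_true[70] = 1.0; r_true[120] = -0.6
trace = np.convolve(r_true, w, mode="same") + 0.01 * np.random.randn(200)
r_hat = ista_deconv(trace, w)
```

The ℓ1 penalty plays the role of the prior information mentioned above: it restores the sparsity that the band-limited convolution destroys.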


Psych ◽  
2020 ◽  
Vol 2 (4) ◽  
pp. 269-278
Author(s):  
Michela Battauz

The four-parameter logistic model is an Item Response Theory model for dichotomous items that limits the probability of giving a positive response to an item to a restricted range, so that even people at the extremes of the latent trait do not have a probability close to zero or one. Although the literature acknowledges the usefulness of this model in certain contexts, the difficulty of estimating the item parameters has limited its use in practice. In this paper we propose a regularized approach to the estimation of the item parameters, based on the inclusion of a penalty term in the log-likelihood function. Simulation studies show the good performance of the proposal, which is further illustrated through an application to a real data set.
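A minimal sketch of the idea is given below for a single item: the four-parameter logistic curve P(θ) = c + (d − c)/(1 + exp(−a(θ − b))), with a ridge-type penalty pulling the asymptotes towards 0 and 1. The penalty form and the logit reparameterisation are plausible choices for illustration, not necessarily the paper's.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def p4pl(theta, a, b, c, d):
    """Four-parameter logistic: lower asymptote c > 0, upper asymptote d < 1."""
    return c + (d - c) * expit(a * (theta - b))

def penalized_negloglik(params, theta, y, lam=1.0):
    """Negative log-likelihood for one item plus a ridge penalty that shrinks
    the asymptotes (on the logit scale) towards c ~ 0 and d ~ 1."""
    a, b, qc, qd = params                     # asymptotes parameterised on the logit scale
    c, d = expit(qc), expit(qd)
    p = np.clip(p4pl(theta, a, b, c, d), 1e-10, 1 - 1e-10)
    nll = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return nll + lam * (qc + 4.0) ** 2 + lam * (qd - 4.0) ** 2

# Hypothetical usage, with theta the latent trait values and y the 0/1 responses:
# fit = minimize(penalized_negloglik, x0=[1.0, 0.0, -2.0, 2.0], args=(theta, y))
```

Without the penalty the likelihood surface for c and d is notoriously flat, which is exactly the estimation difficulty the abstract refers to; the penalty stabilises those two parameters.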


Author(s):  
Olga Mikhaylovna Tikhonova ◽  
Alexander Fedorovich Rezchikov ◽  
Vladimir Andreevich Ivashchenko ◽  
Vadim Alekseevich Kushnikov

The paper presents a system for predicting the accreditation indicators of technical universities based on J. Forrester's system dynamics approach. From an analysis of the cause-and-effect relationships between the selected system variables (the university's accreditation indicators), a directed graph was built. A complex of mathematical models developed to control the quality of engineering education in Russian higher educational institutions is based on this graph. The article presents an algorithm for constructing a model, using one of the modelled variables as an example. The model is a system of non-linear differential equations, and the characteristics of the educational process are determined from the solution of this system. The proposed algorithm for calculating these indicators combines the system dynamics model with a regression model: the mathematical model is constructed on the basis of system dynamics and is then tested for compliance with real data using a regression model built on the statistical data accumulated during the university's operation. The proposed approach is aimed at solving complex problems of managing the educational process in universities. The structure of the proposed model mirrors the structure of the cause-and-effect relationships in the system, and it provides the person responsible for quality control with the ability to quickly and adequately assess the performance of the system.
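To make the "system of non-linear differential equations" concrete, here is a toy two-indicator fragment integrated with SciPy. The indicator names, couplings, and coefficient values are entirely hypothetical stand-ins for the arcs of the paper's cause-and-effect graph, which would be calibrated against the university's accumulated statistics.

```python
import numpy as np
from scipy.integrate import solve_ivp

def accreditation_dynamics(t, x, k):
    """Hypothetical fragment: x1 = share of staff with advanced degrees,
    x2 = graduate employment rate, both in [0, 1]; each indicator reinforces
    the other with saturation, and decays on its own."""
    x1, x2 = x
    dx1 = k[0] * x2 * (1.0 - x1) - k[1] * x1
    dx2 = k[2] * x1 * (1.0 - x2) - k[3] * x2
    return [dx1, dx2]

# Integrate over a 10-year horizon from assumed initial indicator levels
sol = solve_ivp(accreditation_dynamics, t_span=(0, 10), y0=[0.4, 0.5],
                args=([0.8, 0.1, 0.6, 0.05],), dense_output=True)
x1_final, x2_final = sol.y[:, -1]
```

In the paper's scheme, trajectories like `sol.y` would then be checked against real data through the regression model before being used for management decisions.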


Author(s):  
P.L. Nikolaev

This article deals with a method for binary classification of images containing small text. Classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be rotated 180 degrees, so that the image must be rotated to read it. This type of text can be found on the covers of a variety of books, so when recognizing covers it is necessary to determine the orientation of the text before recognizing the text itself. The article proposes a deep neural network for determining the text orientation in the context of book cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network operating on real data, are presented.
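A minimal convolutional classifier for this upright-versus-rotated decision might look like the Keras sketch below; the layer sizes and input resolution are assumptions, since the article's exact architecture is not given here.

```python
import tensorflow as tf

# Binary classifier: 0 = text upright, 1 = text rotated 180 degrees.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 256, 1)),          # grayscale strips containing the text
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training would use synthetic renderings, with evaluation on real cover crops:
# model.fit(synthetic_images, synthetic_labels, validation_data=(real_images, real_labels))
```

Synthetic training pairs are cheap to make for this task: render text strips, keep one copy as class 0 and a 180°-rotated copy as class 1.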


2019 ◽  
Vol XVI (2) ◽  
pp. 1-11
Author(s):  
Farrukh Jamal ◽  
Hesham Mohammed Reyad ◽  
Soha Othman Ahmed ◽  
Muhammad Akbar Ali Shah ◽  
Emrah Altun

A new three-parameter continuous model called the exponentiated half-logistic Lomax distribution is introduced in this paper. Basic mathematical properties of the proposed model were investigated, including raw and incomplete moments, skewness, kurtosis, generating functions, Rényi entropy, Lorenz, Bonferroni and Zenga curves, probability weighted moments, the stress-strength model, order statistics, and record statistics. The model parameters were estimated by the maximum likelihood criterion, and the behaviour of these estimates was examined through a simulation study. The applicability of the new model is illustrated by applying it to a real data set.
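Maximum likelihood fitting of a three-parameter family of this kind can be done numerically as sketched below. Note the caveat: the density used here comes from one common exponentiated half-logistic-G construction, F(x) = [(1 − S(x))/(1 + S(x))]^λ with Lomax survival S(x) = (1 + x/β)^(−α); this parameterisation is an assumption and may differ from the paper's exact form.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x):
    """Negative log-likelihood under the assumed EHL-Lomax construction;
    parameters are optimised on the log scale to keep them positive."""
    lam, a, beta = np.exp(params)
    s = (1.0 + x / beta) ** (-a)                    # Lomax survival function
    # pdf = lam * [(1-s)/(1+s)]^(lam-1) * 2*(a/beta) * (1+x/beta)^(-a-1) / (1+s)^2
    logpdf = (np.log(lam) + (lam - 1) * (np.log1p(-s) - np.log1p(s))
              + np.log(2.0) + np.log(a / beta)
              - (a + 1) * np.log1p(x / beta) - 2 * np.log1p(s))
    return -np.sum(logpdf)

# Hypothetical usage on a positive data sample x:
# x = np.loadtxt("failure_times.txt")
# fit = minimize(neg_loglik, x0=np.zeros(3), args=(x,), method="Nelder-Mead")
# lam_hat, a_hat, beta_hat = np.exp(fit.x)
```

Repeating such fits on simulated samples of varying size is exactly how a simulation study examines the bias and variability of the estimates.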


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
João Lobo ◽  
Rui Henriques ◽  
Sara C. Madeira

Abstract Background Three-way data have started to gain popularity due to their increasing capacity to describe inherently multivariate and temporal events, such as biological responses, social interactions along time, urban dynamics, or complex geophysical phenomena. Triclustering, the subspace clustering of three-way data, enables the discovery of patterns corresponding to data subspaces (triclusters) with values correlated across the three dimensions (observations × features × contexts). With an increasing number of algorithms being proposed, effectively comparing them with state-of-the-art algorithms is paramount. These comparisons are usually performed using real data without a known ground truth, thus limiting the assessments. In this context, we propose a synthetic data generator, G-Tric, allowing the creation of synthetic datasets with configurable properties and the possibility to plant triclusters. The generator is prepared to create datasets resembling real three-way data from biomedical and social data domains, with the additional advantage of providing the ground truth (the triclustering solution) as output. Results G-Tric can replicate real-world datasets and create new ones that match researchers' needs across several properties, including data type (numeric or symbolic), dimensions, and background distribution. Users can tune the patterns and structure that characterize the planted triclusters (subspaces) and how they interact (overlapping). Data quality can also be controlled by defining the amount of missing values, noise, or errors. Furthermore, a benchmark of datasets resembling real data is made available, together with the corresponding triclustering solutions (planted triclusters) and generating parameters. Conclusions Triclustering evaluation using G-Tric makes it possible to combine both intrinsic and extrinsic metrics, yielding more reliable comparisons of solutions. A set of predefined datasets, mimicking widely used three-way data and exploring crucial properties, was generated and made available, highlighting G-Tric's potential to advance the triclustering state of the art by easing the process of evaluating the quality of new triclustering approaches.
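The core idea of planting a tricluster in a noisy three-way array can be illustrated in a few lines of NumPy. This is a drastic simplification of what G-Tric supports (one constant-pattern tricluster, Gaussian background), shown only to make the observations × features × contexts picture concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Background: 100 observations x 50 features x 8 contexts of standard normal noise
data = rng.normal(0.0, 1.0, size=(100, 50, 8))

# Plant one constant tricluster: a random subspace shifted to mean 3.0
rows = rng.choice(100, size=10, replace=False)
cols = rng.choice(50, size=5, replace=False)
ctxs = rng.choice(8, size=3, replace=False)
data[np.ix_(rows, cols, ctxs)] = 3.0 + rng.normal(0.0, 0.1, size=(10, 5, 3))

# The planted subspace is the ground truth an algorithm's output is scored against
ground_truth = {"rows": set(rows), "cols": set(cols), "contexts": set(ctxs)}
```

Having `ground_truth` available is what enables the extrinsic evaluation metrics mentioned in the conclusions, alongside intrinsic measures of tricluster homogeneity.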


2021 ◽  
Vol 40 (3) ◽  
pp. 1-12
Author(s):  
Hao Zhang ◽  
Yuxiao Zhou ◽  
Yifei Tian ◽  
Jun-Hai Yong ◽  
Feng Xu

Reconstructing hand-object interactions is a challenging task due to strong occlusions and complex motions. This article proposes a real-time system that uses a single depth stream to simultaneously reconstruct hand poses, object shape, and rigid/non-rigid motions. To achieve this, we first train a joint learning network to segment the hand and object in a depth image and to predict the 3D keypoints of the hand. With most layers shared between the two tasks, computation cost is reduced, supporting real-time performance. A hybrid dataset is constructed to train the network with real data (to learn real-world distributions) and synthetic data (to cover variations of objects, motions, and viewpoints). Next, the depth of the two targets and the keypoints are used in a uniform optimization to reconstruct the interacting motions. Benefiting from a novel tangential contact constraint, the system not only resolves the remaining ambiguities but also maintains real-time performance. Experiments show that our system handles different hand and object shapes, various interactive motions, and moving cameras.
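The "most layers shared by the two tasks" design can be sketched as a shared encoder with two lightweight heads, as below in PyTorch. Layer sizes, the three-class segmentation convention, and the 21-keypoint hand model are illustrative assumptions, not the paper's actual network.

```python
import torch
import torch.nn as nn

class HandObjectNet(nn.Module):
    """Sketch of a joint network: one shared encoder, two task heads
    (hand/object segmentation and 3D hand keypoint regression)."""
    def __init__(self, n_keypoints=21):
        super().__init__()
        self.n_keypoints = n_keypoints
        self.encoder = nn.Sequential(                        # layers shared by both tasks
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(128, 3, 1)                 # per-pixel logits: background / hand / object
        self.kp_head = nn.Sequential(                        # regress keypoints from pooled features
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, n_keypoints * 3),
        )

    def forward(self, depth):
        f = self.encoder(depth)
        return self.seg_head(f), self.kp_head(f).view(-1, self.n_keypoints, 3)

net = HandObjectNet()
seg_logits, keypoints = net(torch.randn(2, 1, 128, 128))     # a batch of two depth frames
```

Sharing the encoder means the segmentation and keypoint losses back-propagate through the same features, which is where the computational saving for real-time use comes from.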


2021 ◽  
Vol 11 (9) ◽  
pp. 3863
Author(s):  
Ali Emre Öztürk ◽  
Ergun Erçelebi

A large amount of training image data is required for solving image classification problems using deep learning (DL) networks. In this study, we aimed to train DL networks with synthetic images generated using a game engine and to determine how these networks perform on real-image classification problems. The study presents the results of using corner detection and nearest three-point selection (CDNTS) layers to classify bird and rotary-wing unmanned aerial vehicle (RW-UAV) images, provides a comprehensive comparison of two different experimental setups, and emphasizes the significant performance improvements in deep learning-based networks due to the inclusion of a CDNTS layer. Experiment 1 corresponds to training commonly used deep learning-based networks with synthetic data and testing image classification on real data. Experiment 2 corresponds to training the CDNTS layer and commonly used deep learning-based networks with synthetic data and testing image classification on real data. In experiment 1, the best area under the curve (AUC) value for the image classification test accuracy was measured as 72%. In experiment 2, using the CDNTS layer, the AUC value for the image classification test accuracy was measured as 88.9%. A total of 432 different training combinations were investigated in the experimental setups. The networks were trained with four different optimizers, considering all combinations of the batch size, learning rate, and dropout hyperparameters. The test accuracy AUC values for networks in experiment 1 ranged from 55% to 74%, whereas the test accuracy AUC values for experiment 2 networks with a CDNTS layer ranged from 76% to 89.9%. It was observed that the CDNTS layer has a considerable effect on the image classification accuracy of deep learning-based networks. AUC, F-score, and test accuracy measures were used to validate the success of the networks.
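One plausible reading of a corner-detection and nearest-three-point preprocessing step is sketched below with OpenCV: detect corners, then record each corner's distances to its three nearest neighbours as shape features. The exact CDNTS definition is not specified here, so treat this as an assumed interpretation.

```python
import cv2
import numpy as np

def cdnts_features(gray, max_corners=50):
    """Hedged sketch of corner detection + nearest three-point selection:
    for each detected corner, keep the distances to its three nearest
    neighbours (the paper's exact CDNTS layer may differ)."""
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=5)
    if pts is None:
        return np.zeros((0, 3))
    pts = pts.reshape(-1, 2)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # exclude self-distances
    return np.sort(d, axis=1)[:, :3]             # three nearest-neighbour distances per corner

# Hypothetical usage on a grayscale frame:
# gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# feats = cdnts_features(gray)
```

Geometry-derived features of this sort are less sensitive to the texture gap between game-engine renderings and real photographs, which is consistent with the AUC gains reported for experiment 2.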


Entropy ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. 599
Author(s):  
Danilo Cruz ◽  
João de Araújo ◽  
Carlos da Costa ◽  
Carlos da Silva

Full waveform inversion is an advantageous technique for obtaining high-resolution subsurface information. In the petroleum industry, mainly in reservoir characterisation, it is common to use information from wells as prior information to decrease the ambiguity of the obtained results. To this end, we propose adding a relative entropy term to the formalism of full waveform inversion. In this context, entropy is simply a name for the regularisation, and its role is to help convergence to the global minimum. The application of entropy in inverse problems usually involves formulating the problem so that statistical concepts can be used. To avoid this step, we propose a deterministic application to full waveform inversion. We discuss some aspects of relative entropy and show three different ways of using it to add prior information to the inverse problem. We use a dynamic weighting scheme to add the prior information through entropy; the idea is that the prior information can help to find the path to the global minimum at the beginning of the inversion process. In all cases, the prior information can be incorporated very quickly into full waveform inversion and lead the inversion to the desired solution. Including the logarithmic weighting that constitutes entropy in the inverse problem suppresses low-intensity ripples and sharpens point events. Thus, adding relative entropy to full waveform inversion can provide results with better resolution. In regions of the BP 2004 model where salt is present, we obtained a significant improvement by adding prior information through relative entropy for synthetic data. We show that prior information added through entropy in the full-waveform inversion formalism proves to be a way to avoid local minima.
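A schematic of such a regularised objective is sketched below: the usual least-squares data misfit plus a KL-type relative entropy term measuring the divergence of the current model from the well-derived prior. The `forward` operator is a placeholder for the wave-equation modelling engine, the model is assumed positive (e.g., velocity or slowness), and the dynamic weighting is reduced to a single scalar `lam`; all of these are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def objective(m, d_obs, forward, m_prior, lam):
    """FWI objective with a relative-entropy regulariser:
    J(m) = 0.5 * ||F(m) - d_obs||^2 + lam * sum_i m_i * log(m_i / m_prior_i)."""
    residual = forward(m) - d_obs
    misfit = 0.5 * np.sum(residual ** 2)
    entropy = np.sum(m * np.log(m / m_prior))    # relative entropy of m w.r.t. the prior
    return misfit + lam * entropy

def entropy_gradient(m, m_prior):
    """Gradient of the entropy term, added to the usual adjoint-state data gradient."""
    return np.log(m / m_prior) + 1.0
```

Under a dynamic weighting scheme, `lam` would start large, letting the well prior steer the early iterations towards the basin of the global minimum, and then decay so the seismic data dominate the final resolution.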

