rate function
Recently Published Documents


TOTAL DOCUMENTS: 646 (five years: 178)

H-INDEX: 35 (five years: 6)

2022
Author(s): Sebastian Hoehna, Bjoern Tore Kopperud, Andrew F Magee

Diversification rates inferred from phylogenies are not identifiable: there are infinitely many combinations of speciation and extinction rate functions that have exactly the same likelihood score for a given phylogeny, forming a congruence class. The specific shape and characteristics of such congruence classes have not yet been studied, and it is not known whether speciation and extinction rate functions within a congruence class share common features. Instead of striving to make diversification rates identifiable, we can embrace their inherently non-identifiable nature. We use two different approaches to explore a congruence class: (i) testing specific alternative hypotheses, and (ii) randomly sampling alternative rate functions within the congruence class. Our methods are implemented in the open-source R package ACDC (https://github.com/afmagee/ACDC). ACDC offers a flexible approach to exploring a congruence class and summarizes the rate functions within it; the summaries can highlight common trends, e.g., increasing, flat, or decreasing rates. Although there are infinitely many equally likely diversification rate functions, they can share common features, and ACDC can be used to assess whether diversification rate patterns are robust despite non-identifiability. In our example, we clearly identify three phases of diversification rate changes that are common to all models in the congruence class. Thus, congruence classes are not necessarily a problem for studying historical patterns of biodiversity from phylogenies.
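As background for the congruence-class statement above, the characterization given by Louca and Pennell (2020) is, to our understanding, the underlying result: two speciation-extinction models assign the same likelihood to an extant timetree exactly when they share the pulled diversification rate (together with the present-day speciation rate). A LaTeX sketch of that quantity:

```latex
% Pulled diversification rate (following Louca & Pennell, 2020):
% models (\lambda, \mu) and (\tilde{\lambda}, \tilde{\mu}) are congruent
% iff r_p = \tilde{r}_p and \lambda(0) = \tilde{\lambda}(0).
r_p(t) \;=\; \lambda(t) \;-\; \mu(t) \;+\; \frac{1}{\lambda(t)}\,\frac{\mathrm{d}\lambda(t)}{\mathrm{d}t}
```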


2022, Vol 0 (0), pp. 0
Author(s): Victor Vargas

Consider $\beta > 1$ and let $\lfloor \beta \rfloor$ be its integer part. It is widely known that any real number $\alpha \in \bigl[0, \frac{\lfloor \beta \rfloor}{\beta - 1}\bigr]$ can be represented in base $\beta$ by a series expansion of the form $\alpha = \sum_{n = 1}^\infty x_n \beta^{-n}$, where $x = (x_n)_{n \geq 1}$ is a sequence taking values in the alphabet $\{0, \ldots, \lfloor \beta \rfloor\}$. The so-called $\beta$-shift, denoted by $\Sigma_\beta$, is the set of sequences all of whose iterates under the shift map are less than or equal to the quasi-greedy $\beta$-expansion of $1$. Fixing a Hölder continuous potential $A$, we give an explicit expression for the main eigenfunction $\psi_A$ of the Ruelle operator, in order to obtain a natural extension to the bilateral $\beta$-shift of its corresponding Gibbs state $\mu_A$. Our main goal is to prove a first-level large deviations principle for the family $(\mu_{tA})_{t > 1}$ with a rate function $I$ attaining its maximum value on the union of the supports of all the maximizing measures of $A$. This is proved through a technique using the representation of $\Sigma_\beta$ and its bilateral extension $\widehat{\Sigma_\beta}$ in terms of the quasi-greedy $\beta$-expansion of $1$ and the so-called involution kernel associated to the potential $A$.
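To make the series representation above concrete, here is a minimal Python sketch of the greedy digit algorithm for $\alpha \in [0, 1]$ (function name and parameters are ours, for illustration; the quasi-greedy expansion of $1$ used in the paper differs from the greedy one precisely when the latter is finite):

```python
import math

def greedy_beta_expansion(alpha, beta, n_digits=20):
    """Greedy digits (x_n) with alpha ~= sum_{n>=1} x_n * beta**(-n).

    Illustrative sketch for alpha in [0, 1]; digits take values in
    the alphabet {0, ..., floor(beta)}.
    """
    digits, r = [], alpha
    for _ in range(n_digits):
        r *= beta
        d = min(int(r), math.floor(beta))  # greedy: largest admissible digit
        digits.append(d)
        r -= d
    return digits

# Example: golden-ratio base, beta = (1 + sqrt(5)) / 2.
beta = (1 + math.sqrt(5)) / 2
x = greedy_beta_expansion(1.0, beta, 10)
print(x)  # digits of an expansion of 1 in base beta
print(sum(d * beta**-(n + 1) for n, d in enumerate(x)))  # ~1.0
```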


2021, Vol 26 (4), pp. 82
Author(s): Farrukh Jamal, Ali H. Abuzaid, Muhammad H. Tahir, Muhammad Arslan Nasir, Sadaf Khan, ...

In this article, the Burr III distribution is proposed with a significantly improved functional form. This modification enhances the flexibility of the classical distribution, allowing it to model all shapes of the hazard rate function, including increasing, decreasing, bathtub, upside-down bathtub, and nearly constant. Some of its elementary properties, such as the rth moment, sth incomplete moment, moment generating function, skewness, kurtosis, mode, ith order statistic, and stochastic ordering, are presented in a clear and concise manner. The well-established technique of maximum likelihood is employed to estimate the model parameters. Middle-censoring is considered as a modern, general censoring scheme. The efficacy of the proposed model is demonstrated through three applications involving complete and censored samples.
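The modified functional form itself is not given in the abstract, but the hazard shapes it refers to can be explored numerically from the classical two-parameter Burr III baseline; a minimal sketch (helper names are ours) follows:

```python
import numpy as np

def burr3_cdf(x, c, k):
    """Classical two-parameter Burr III CDF: F(x) = (1 + x**-c)**-k."""
    return (1.0 + x**(-c))**(-k)

def burr3_pdf(x, c, k):
    """Burr III density: f(x) = c*k * x**-(c+1) * (1 + x**-c)**-(k+1)."""
    return c * k * x**(-(c + 1)) * (1.0 + x**(-c))**(-(k + 1))

def burr3_hazard(x, c, k):
    """Hazard rate h(x) = f(x) / (1 - F(x))."""
    return burr3_pdf(x, c, k) / (1.0 - burr3_cdf(x, c, k))

# Evaluate h(x) for a few (c, k) pairs to compare attainable shapes.
x = np.linspace(0.05, 5.0, 500)
for c, k in [(0.5, 0.5), (2.0, 1.0), (6.0, 0.2)]:
    h = burr3_hazard(x, c, k)
    print(f"c={c}, k={k}: h(0.05)={h[0]:.3f}, max={h.max():.3f}, h(5)={h[-1]:.3f}")
```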


Processes, 2021, Vol 9 (12), pp. 2272
Author(s): Jose C. Merchuk, Francisco García-Camacho, Lorenzo López-Rosales

A novel mechanistic model of COVID-19 spread is presented. The pool of infected individuals is not treated as homogeneously mixed but as a passage into which individuals enter upon contagion, through which they pass (in the manner of "plug flow"), and from which they exit at their recovery points after a fixed time. A novel concept, the infection unit, is defined. The model separately considers several population pools: two pools of symptomatic and asymptomatic infected patients; three different pools of recovered individuals; and pools of hospitalized patients, of the quarantined, and of those who die from COVID-19. Transmission of the disease is described by an infection rate function, modulated by an encounter frequency function. This definition makes a separate pool for the exposed, as used in several other models, redundant. Simulations are presented, and the effects of social restrictions and quarantine policies on pandemic spread are demonstrated. The model differs conceptually from others of its kind in its description of the transmission dynamics of the disease. A set of experimental data is used to calibrate the model, which predicts the dynamic behavior of each of the defined pools during pandemic spread.
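A drastically reduced sketch of the "plug flow" idea (not the authors' full multi-pool model; the pool structure, names, and constant infection rate below are our stand-ins) might look like:

```python
from collections import deque

def plug_flow_sir(N=1_000_000, i0=10, tau=14, beta=0.25, days=200):
    """Minimal sketch of a 'plug flow' infected pool: each newly infected
    cohort stays exactly tau days in the pool, then exits to recovered.
    beta stands in for the paper's encounter-modulated infection rate.
    """
    S, R = N - i0, 0
    pool = deque([0.0] * tau, maxlen=tau)  # cohorts, indexed by day of entry
    pool[-1] = i0
    history = []
    for t in range(days):
        I = sum(pool)                # everyone currently inside the passage
        new = min(beta * S * I / N, S)
        leaving = pool[0]            # cohort that has completed tau days
        pool.append(new)             # maxlen deque evicts pool[0] automatically
        S -= new
        R += leaving
        history.append((t, S, I, R))
    return history

for t, S, I, R in plug_flow_sir()[::40]:
    print(f"day {t:3d}: S={S:9.0f} I={I:9.0f} R={R:9.0f}")
```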


Author(s): Umme Habibah Rahman, Tanusree Deb Roy

In this paper, a new distribution is proposed based on the concept of exponentiation. Its reliability properties, including the survival function, hazard rate function, reversed hazard rate function, and Mills ratio, are studied here. Its quantile function and order statistics are also included. The parameters of the distribution are estimated by the method of maximum likelihood, along with the Fisher information matrix, and confidence intervals are also given. The application is discussed using 30 years of temperature data for the city of Silchar, Assam, India. The goodness of fit of the proposed distribution is compared with that of the Fréchet distribution; for all 12 months, the proposed distribution fits better than the Fréchet distribution.
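For reference, the reliability quantities listed above are all determined by the density $f$ and distribution function $F$; in LaTeX:

```latex
\begin{aligned}
S(x) &= 1 - F(x) && \text{(survival function)} \\
h(x) &= \frac{f(x)}{S(x)} && \text{(hazard rate function)} \\
\tilde{h}(x) &= \frac{f(x)}{F(x)} && \text{(reversed hazard rate function)} \\
m(x) &= \frac{S(x)}{f(x)} = \frac{1}{h(x)} && \text{(Mills ratio)}
\end{aligned}
```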


Author(s): Nelson Doe Dzivor, Henry Otoo, Eric Neebo Wiah

The quest to improve the flexibility of probability distributions motivated this research. A four-parameter generalization of the Janardan distribution, known as the Kumaraswamy-Janardan distribution, is proposed through parameterization and studied. The probability density function, cumulative distribution function, survival function, and hazard rate function of the distribution are established. Statistical properties such as the moments, the moment generating function, and maximum likelihood estimation of the model are discussed. The parameters are estimated using the simulated annealing optimization algorithm. The flexibility of the model in comparison with the baseline model as well as other competing sub-models is verified using the Akaike Information Criterion (AIC). The model is tested with real data and proves more flexible in fitting real data than any of its sub-models considered.
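The Janardan baseline density is not reproduced in the abstract, so the sketch below uses the generic Kumaraswamy-G construction, F(x) = 1 - (1 - G(x)^a)^b, with an exponential baseline as a stand-in, and estimates parameters by simulated annealing (scipy's dual_annealing) before computing the AIC:

```python
import numpy as np
from scipy.optimize import dual_annealing
from scipy.stats import expon

def kuma_g_logpdf(x, a, b, g_pdf, g_cdf):
    """Kumaraswamy-G log-density: f = a*b*g*G**(a-1)*(1 - G**a)**(b-1)."""
    G = g_cdf(x)
    return (np.log(a) + np.log(b) + np.log(g_pdf(x))
            + (a - 1) * np.log(G) + (b - 1) * np.log1p(-G**a))

def neg_loglik(theta, x):
    a, b, scale = theta
    return -np.sum(kuma_g_logpdf(x, a, b,
                                 lambda t: expon.pdf(t, scale=scale),
                                 lambda t: expon.cdf(t, scale=scale)))

rng = np.random.default_rng(0)
x = rng.weibull(1.5, size=200)          # stand-in data for illustration
bounds = [(0.05, 20), (0.05, 20), (0.05, 20)]
res = dual_annealing(neg_loglik, bounds, args=(x,), seed=1)
k = len(res.x)
aic = 2 * k + 2 * res.fun               # AIC = 2k - 2*logL
print(res.x, aic)
```

A lower AIC for the four-parameter fit than for the baseline fit is the comparison criterion the abstract refers to.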


Processes, 2021, Vol 9 (12), pp. 2242
Author(s): Andreas Håkansson

The fragmentation rate function connects the fundamental drop breakup process with the resulting drop size distribution and is central to understanding and modeling emulsification processes. There is great interest in being able to measure it reliably from emulsification experiments, both to generate data for validating theoretical fragmentation rate expressions and as a tool for studying emulsification processes. Consequently, several methods have been suggested for measuring fragmentation rates from emulsion experiments; typically, each study suggests a new method that is rarely used again, and the lack of agreement on a standard method has become a substantial challenge. This contribution critically and systematically analyses four influential suggestions for measuring the fragmentation rate in terms of validity, reliability, and sensitivity to method assumptions. The back-calculation method is identified as the most promising, with high reliability and low sensitivity to assumptions, whereas performing a non-linear regression on a parameterized model (as commonly suggested) is unsuitable due to its high sensitivity. The simplistic zero-order method is identified as an interesting supplemental tool that could be used for qualitative comparisons but not for quantification.
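As an illustration of the zero-order idea mentioned last (this is our reading of it, with our own variable names): for the largest drop size class there is no birth by breakup of even larger drops, so the population balance reduces to dn/dt = -g*n and g can be read off the initial decay of ln n:

```python
import numpy as np

def zero_order_rate(t, n_v):
    """Zero-order estimate of the fragmentation rate g(v) for a size class
    with no breakup births: dn/dt = -g*n, so ln n decays linearly with
    slope -g, which we fit by least squares.
    t   : sample times from an emulsification experiment
    n_v : measured number concentration of the size class at those times
    """
    slope, _ = np.polyfit(t, np.log(n_v), 1)
    return -slope

# Synthetic check: generate data with a known rate and recover it.
g_true = 0.12                               # 1/s
t = np.linspace(0.0, 30.0, 16)              # s
n = 1e9 * np.exp(-g_true * t)
n *= np.random.default_rng(3).lognormal(0.0, 0.02, t.size)  # measurement noise
print(zero_order_rate(t, n))                # ~0.12
```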


2021
Author(s): Sima Varnosafaderani

Most engineered systems are inclined to fail sometime during their lifetime. Many of these systems are repairable and not necessarily discarded and replaced upon failure. Unlike replacements, where the failed system is replaced with a new and identical system, not all repairs have an equivalent effect on the working condition of the system. Describing the effect of repairs is a requirement in modeling consecutive failures of a repairable system; at the very least, it is assumed that a repair simply returns the failed system to an operational state without affecting its working condition (i.e., the repair is minimal). Although this assumption simplifies the modeling process, it is not the most accurate description of the effect of repair in real situations. Often, along with returning a failed system to an operational state, repairs can improve the working condition of the system and thus increase its reliability, which affects the rate of future failures of the system.

Repair models provide a generalized framework for realistic modeling of consecutive failures of engineered systems, and have broad applications in fields such as system reliability and warranty cost analysis. The overall goal of this research is to advance the state of the art in modeling the effect of general repairs, and hence the failures of repairable systems. Two specific types of system are considered: (i) a system whose working condition initially improves with time or usage, and whose lifetime is modeled as a univariate random variable with a non-monotonic failure rate function; (ii) a system whose working condition deteriorates with age and usage, and whose lifetime is modeled as a bivariate random variable with an increasing failure rate function.

Most univariate lifetime distributions used to model system lifetimes are assumed to have increasing failure rate functions. In such cases, modeling the effect of general repairs is straightforward: the effect of a repair can be modeled as a possible decrease, proportional to the effectiveness of the repair, in the conditional intensity function of the associated failure process. For instance, a general repair can be viewed as the replacement of the failed system with an identical system at a younger age, so that the conditional failure intensity following the repair is lower than the conditional failure intensity prior to the failure. When the failure rate function is initially decreasing, specifically bathtub-shaped, general repair models suggested for systems with increasing failure rate functions can only be applied when initial repairs are assumed to be minimal. In this study, we propose a new approach to modeling the effect of general repairs on systems with a bathtub-shaped failure rate function. The effect of a general repair is characterized as a modification in the conditional intensity function of the corresponding failure process, such that the system following a general repair is at least as reliable as a system that has not failed. We discuss applications of the results in the context of warranty cost analysis and provide numerical illustrations to demonstrate properties of the models.

Sometimes the failures of a system may be attributed to changes in more than one measure of its working condition, for instance, the age and some measure of the usage of the system (such as mileage). Then, the system lifetime is modeled as a bivariate random variable. Most general repair models for systems with bivariate lifetime distributions involve reducing the failure process to a one-dimensional process by, for instance, assuming a relationship between age and usage or by defining a composite scale; univariate repair models are then used to describe the effect of repairs. In this study, we propose a new approach to model the effect of general repairs performed on a system whose lifetime is modeled as a bivariate random variable, where the distributions of the bivariate inter-failure lifetimes depend on the effect of all previous repairs and, following a general repair, the system is at least as reliable as a system that has not failed. The lifetime of the original system is assumed to have an increasing failure rate (specifically, hazard gradient vector) function. We discuss applications of the associated failure process in the context of two-dimensional warranty cost analysis and provide simulation studies to illustrate the results.

This study is primarily theoretical, with most of the results being analytic. However, at times, due to the intractability of some of the mathematical expressions, simulation studies are used to illustrate the properties and applications of the proposed models and results.
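The "replacement at a younger age" view of a general repair described above is classically formalized by virtual age (Kijima-type) models, not necessarily the thesis's own construction; a minimal simulation sketch under that assumption, with a Weibull baseline (parameters and names ours), is:

```python
import numpy as np

rng = np.random.default_rng(42)

def next_failure(v, beta, eta):
    """Sample the next inter-failure time given virtual age v, for a
    Weibull(beta, eta) baseline: S(x | v) = S(v + x) / S(v)."""
    u = rng.uniform()
    return eta * ((v / eta)**beta - np.log(u))**(1.0 / beta) - v

def simulate_kijima1(delta, beta=2.0, eta=100.0, horizon=500.0):
    """Kijima Type I virtual age: V_i = V_{i-1} + delta * X_i, where X_i is
    the i-th inter-failure time. delta = 1 -> minimal repair (as bad as old),
    delta = 0 -> perfect repair (as good as new)."""
    t, v, failures = 0.0, 0.0, []
    while True:
        x = next_failure(v, beta, eta)
        t += x
        if t > horizon:
            return failures
        failures.append(t)
        v += delta * x

for delta in (0.0, 0.5, 1.0):
    n = np.mean([len(simulate_kijima1(delta)) for _ in range(200)])
    print(f"delta={delta}: mean failures over horizon = {n:.1f}")
```

With an increasing failure rate (beta > 1), better repairs (smaller delta) yield fewer failures over the horizon, which is the intuition behind repair effectiveness in warranty cost analysis.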


Mathematics, 2021, Vol 9 (23), pp. 3113
Author(s): Muhammed Rasheed Irshad, Christophe Chesneau, Soman Latha Nitin, Damodaran Santhamani Shibu, Radhakumari Maya

Many studies have underlined the importance of the log-normal distribution in the modeling of phenomena occurring in biology. With this in mind, in this article we offer a new and motivated transformed version of the log-normal distribution, primarily for use with biological data. The hazard rate function, quantile function, and several other significant aspects of the new distribution are investigated. In particular, we show that the hazard rate function can take increasing, decreasing, bathtub, and upside-down bathtub shapes. The maximum likelihood and Bayesian techniques are both used to estimate the unknown parameters. Based on the proposed distribution, we also present a parametric regression model and a Bayesian regression approach. To assess long-run performance, simulation studies based on the maximum likelihood and Bayesian estimation procedures are also conducted. Two real datasets are used to demonstrate the applicability of the new distribution. The significance of the third parameter in the new model is tested using the likelihood ratio test. Furthermore, the parametric bootstrap approach is used to determine the effectiveness of the suggested model for the datasets.
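The likelihood ratio test mentioned for the third parameter is standard for nested models; a small sketch (the fitted log-likelihood values below are hypothetical placeholders):

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_full, loglik_nested, df=1):
    """LR statistic 2*(l_full - l_nested) ~ chi2(df) under the null that the
    extra parameter(s) are unnecessary (df = 1 for one added parameter, as
    with the third parameter in the abstract's model)."""
    lr = 2.0 * (loglik_full - loglik_nested)
    return lr, chi2.sf(lr, df)

lr, p = likelihood_ratio_test(-412.3, -418.9, df=1)
print(f"LR = {lr:.2f}, p = {p:.4f}")  # a small p favors the fuller model
```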

