Asian Journal of Probability and Statistics
Latest Publications


TOTAL DOCUMENTS

315
(FIVE YEARS 284)

H-INDEX

3
(FIVE YEARS 2)

Published by Sciencedomain International

ISSN: 2582-0230

Author(s):  
Atanu, Enebi Yahaya ◽  
Ette, Harrison Etuk ◽  
Amos, Emeka

This study compares the performance of Autoregressive Integrated Moving Average (ARIMA) and Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models in forecasting crude oil price data obtained from the Central Bank of Nigeria (CBN, 2019) Statistical Bulletin. Forecasting crude oil prices plays an important role in decision making for the Nigerian government and all other sectors of the economy. Crude oil prices are volatile time series data, with large price swings during periods of shortage or oversupply. The statistical analysis used a time plot to display the trend of the data, the Autocorrelation Function (ACF), the Partial Autocorrelation Function (PACF), and the Dickey-Fuller test for stationarity; forecasting was then carried out with the best-fitting ARIMA and GARCH models. The results show that ARIMA (3, 1, 2) is the best ARIMA model for forecasting the monthly crude oil price, while GARCH (1, 1) is the best GARCH model and, for the specified set of parameters, the best fit for the data set considered.
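As a rough illustration (not taken from the paper, whose fitted parameters are not reported here), the GARCH(1,1) conditional-variance recursion underlying the second model can be sketched as follows; the parameter values in the example are purely hypothetical:

```python
def garch11_variance(returns, omega, alpha, beta):
    """Conditional variances sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}
    under a GARCH(1,1) model, started at the unconditional variance."""
    sigma2 = [omega / (1.0 - alpha - beta)]  # unconditional variance (needs alpha+beta < 1)
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2
```

In practice the parameters (omega, alpha, beta) would be estimated by maximum likelihood, e.g. with a dedicated package, rather than fixed by hand.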


Author(s):  
A. Audu ◽  
A. Danbaba ◽  
S. K. Ahmad ◽  
N. Musa ◽  
A. Shehu ◽  
...  

Human-assisted surveys, such as medical and social science surveys, are frequently plagued by non-response or missing observations. Several authors have devised imputation algorithms to account for missing observations during analysis. Nonetheless, the estimators in several of these imputation schemes rely on a known population mean of the auxiliary variable. In this paper, a new class of almost unbiased imputation methods is suggested that replaces the known population mean of the auxiliary variable with a sample-based estimate. Using the Taylor series expansion technique, the MSE of the proposed class of estimators was derived up to a first-order approximation. Conditions were also specified under which the new estimators are more efficient than the other estimators considered. The results of numerical examples through simulations revealed that the suggested class of estimators is more efficient.
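The general idea of auxiliary-variable imputation without a known population mean can be sketched with a simple ratio-type imputation (an illustrative scheme, not the paper's proposed class): the ratio of responding-unit means stands in for the unknown population quantity.

```python
def ratio_impute(y, x):
    """Fill missing y-values (None) via ratio imputation y_i* = (ybar_r / xbar_r) * x_i,
    where ybar_r, xbar_r are means over responding units only -- no known
    population mean of the auxiliary variable x is required."""
    resp = [(yi, xi) for yi, xi in zip(y, x) if yi is not None]
    ybar_r = sum(yi for yi, _ in resp) / len(resp)
    xbar_r = sum(xi for _, xi in resp) / len(resp)
    ratio = ybar_r / xbar_r
    return [yi if yi is not None else ratio * xi for yi, xi in zip(y, x)]
```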


Author(s):  
U. Mishra ◽  
J. R. Singh

In the present article, the effect of measurement error on the power function of control charts for the mean with control limits is considered for a non-normal population. The non-normality is represented by the first four terms of an Edgeworth series. Tabular and visual comparisons are also provided for a better understanding of the impact of measurement error on the power function under non-normality.
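A minimal sketch of the mechanism, under normality only (the paper's Edgeworth correction terms for non-normality are omitted): additive measurement error inflates the observed standard deviation, which dilutes the standardized shift and lowers the power of a k-sigma mean chart.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def xbar_chart_power(delta, n, sigma=1.0, sigma_m=0.0, k=3.0):
    """Power of a k-sigma X-bar chart to detect a mean shift `delta`,
    with process std. dev. `sigma` and measurement-error std. dev. `sigma_m`
    (normal case; illustrative, not the article's non-normal power function)."""
    sigma_tot = math.sqrt(sigma ** 2 + sigma_m ** 2)  # error inflates spread
    shift = delta * math.sqrt(n) / sigma_tot
    return (1.0 - norm_cdf(k - shift)) + norm_cdf(-k - shift)
```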


Author(s):  
Umme Habibah Rahman ◽  
Tanusree Deb Roy

In this paper, a new distribution is proposed based on the concept of exponentiation. Its reliability analysis, including the survival function, hazard rate function, reversed hazard rate function and Mills ratio, is studied here, and its quantile function and order statistics are also included. The parameters of the distribution are estimated by the method of maximum likelihood, and the Fisher information matrix and confidence intervals are also given. The application is illustrated with 30 years of temperature data for Silchar city, Assam, India. The goodness of fit of the proposed distribution is compared with the Fréchet distribution; for all 12 months, the proposed distribution fits better than the Fréchet distribution.
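The exponentiation construction can be illustrated with the well-known exponentiated (generalized) exponential family, G(x) = F(x)^theta with an exponential baseline; this is only an illustrative stand-in, since the abstract does not specify the paper's baseline distribution:

```python
import math

def gen_exp_cdf(x, lam, theta):
    """Exponentiated-exponential CDF G(x) = (1 - exp(-lam*x))**theta, x > 0."""
    return (1.0 - math.exp(-lam * x)) ** theta

def gen_exp_survival(x, lam, theta):
    """Survival (reliability) function S(x) = 1 - G(x)."""
    return 1.0 - gen_exp_cdf(x, lam, theta)

def gen_exp_hazard(x, lam, theta):
    """Hazard rate h(x) = g(x) / S(x), with g the closed-form density."""
    pdf = theta * lam * math.exp(-lam * x) * (1.0 - math.exp(-lam * x)) ** (theta - 1.0)
    return pdf / gen_exp_survival(x, lam, theta)
```

For theta = 1 the family collapses to the exponential distribution, so the hazard reduces to the constant lam, which is a convenient sanity check.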


Author(s):  
Mbanefo S. Madukaife

This paper compares the empirical power performance of eight tests for multivariate normality belonging to the Baringhaus-Henze-Epps-Pulley (BHEP) class. The tests are compared under eight different alternative distributions. The results show that the eight statistics have good control of the type-I error. Some tests are more sensitive to distributional differences, and hence more powerful, than others; the generally most powerful ones are therefore recommended.
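The BHEP statistic has a well-known closed form as a weighted L2-distance between the empirical characteristic function and that of the standard normal. A minimal sketch, assuming the data have already been standardized by the sample mean and covariance (the standardization step is omitted):

```python
import math

def bhep_statistic(Y, beta=1.0):
    """BHEP test statistic for standardized d-variate observations Y
    (a list of equal-length tuples), with smoothing parameter beta.
    Large values speak against multivariate normality."""
    n, d = len(Y), len(Y[0])

    def sq(u):  # squared Euclidean norm
        return sum(ui * ui for ui in u)

    term1 = sum(math.exp(-beta ** 2 * sq([a - b for a, b in zip(Y[j], Y[k])]) / 2.0)
                for j in range(n) for k in range(n)) / n
    term2 = 2.0 * (1.0 + beta ** 2) ** (-d / 2.0) * sum(
        math.exp(-beta ** 2 * sq(y) / (2.0 * (1.0 + beta ** 2))) for y in Y)
    term3 = n * (1.0 + 2.0 * beta ** 2) ** (-d / 2.0)
    return term1 - term2 + term3
```

Different members of the BHEP class correspond to different choices of beta, which is one axis along which the eight compared tests vary.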


Author(s):  
Kelachi P. Enwere ◽  
Uchenna P. Ogoke

Aims: The study seeks to determine the relationships that exist among continuous probability distributions and to use interpolation techniques to estimate unavailable but desired values of a given probability distribution. Study Design: Statistical probability tables for the Normal, Student t, Chi-squared, F and Gamma distributions were used to compare interpolated values with tabulated values. Charts and tables were used to represent the relationships among the five probability distributions. Methodology: The linear interpolation technique was employed to approximate unavailable but desired values from the statistical tables. The data were analyzed by interpolating unavailable but desired values at the 95% level for the five continuous probability distributions. Results: Interpolated values are as close as possible to the exact values, and the difference between the exact and interpolated values is not pronounced. The tables and charts established that relationships do exist among the Normal, Student-t, Chi-squared, F and Gamma distributions. Conclusion: Interpolation techniques can be applied to obtain unavailable but desired information in a data set; thus, uncertainty in a data set can be discovered, analyzed and interpreted to produce the desired results. Understanding how these probability distributions are related can also inform how they may be used interchangeably by statisticians and other researchers who apply statistical methods in practical applications.
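The linear interpolation technique used in the study amounts to the standard two-point formula. A sketch, with a worked example using the standard two-sided 95% Student-t critical values at 40 and 60 degrees of freedom (2.021 and 2.000) to approximate the untabulated value at 50 degrees of freedom:

```python
def linear_interpolate(x, x0, y0, x1, y1):
    """Estimate a table value at x by linear interpolation between
    the bracketing tabulated points (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# t-table example: df=40 -> 2.021, df=60 -> 2.000, so df=50 -> about 2.0105
t_50 = linear_interpolate(50, 40, 2.021, 60, 2.000)
```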


Author(s):  
R. M. Refaey ◽  
G. R. AL-Dayian ◽  
A. A. EL-Helbawy

In this paper, a bivariate compound exponentiated survival function of the Lomax distribution is constructed based on the technique of AL-Hussaini (2011). Some properties of the distribution are derived. Maximum likelihood estimation and prediction of future observations are considered. Bayesian estimation and prediction are also studied under the squared error loss function. The performance of the proposed bivariate distribution is examined in a simulation study. Finally, a real data set is analyzed under the proposed distribution to illustrate its flexibility in real-life applications.
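Two of the abstract's building blocks can be sketched in miniature: the univariate Lomax distribution, and the fact that the Bayes estimate under squared error loss is the posterior mean (here approximated on a grid with an illustrative flat prior; the paper's bivariate compound construction and its priors are not reproduced):

```python
def lomax_sf(x, alpha, lam):
    """Lomax (Pareto type II) survival function S(x) = (1 + x/lam)**(-alpha)."""
    return (1.0 + x / lam) ** (-alpha)

def lomax_pdf(x, alpha, lam):
    """Lomax density f(x) = (alpha/lam) * (1 + x/lam)**-(alpha+1)."""
    return (alpha / lam) * (1.0 + x / lam) ** (-(alpha + 1.0))

def bayes_alpha_posterior_mean(data, lam, prior, grid):
    """Squared-error-loss Bayes estimate of the Lomax shape alpha,
    i.e. the posterior mean, approximated on a grid of alpha values."""
    def likelihood(a):
        p = 1.0
        for x in data:
            p *= lomax_pdf(x, a, lam)
        return p
    weights = [prior(a) * likelihood(a) for a in grid]
    total = sum(weights)
    return sum(a * w for a, w in zip(grid, weights)) / total
```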


Author(s):  
Ibrahim Adamu ◽  
Chukwudi Justin Ogbonna ◽  
Yunusa Adamu ◽  
Yahaya Zakari

Coronavirus disease (COVID-19), which was discovered in December 2019, has been spreading worldwide like wildfire. In view of this, there is a need for continuous research on the impact, consequences and possible treatment of the pandemic in Nigeria and the world at large. This research therefore aims at analyzing the spread of the coronavirus pandemic in Nigeria using univariate and multivariate models, namely ARIMA and ARIMAX. The daily data used in this research were obtained from the NCDC official website, covering 19th April, 2020 to 20th April, 2021, a total of 384 observations, and were analyzed with the R and EViews 10 software. Three variables were examined: total confirmed, discharged and death cases, with a view to establishing reliable forecasts for better decision making and supporting drastic action to reduce the day-to-day spread of the pandemic. Summary statistics and stationarity tests were computed, the data being stationary at the first difference, and the design technique was conducted as well. The best-fitting model was selected using the Akaike Information Criterion (AIC); the ARIMA (1,1,3) model with an exogenous variable was chosen from the candidate ARIMA models with the minimum AIC. From this model, a sixty-day forecast showed an upward trend in the total confirmed cases of the pandemic in the country. The government, through its task force, can use the predicted values to take the necessary measures and emphasize taking COVID-19 vaccines so as to prevent further spread of the virus.
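Two of the workflow's steps, differencing to achieve stationarity and AIC-based model selection, can be sketched as follows (illustrative utilities only; the paper's actual estimation was done in R and EViews 10):

```python
def difference(series, d=1):
    """Apply d-th order differencing, as used to make a series stationary
    before fitting an ARIMA/ARIMAX model."""
    for _ in range(d):
        series = [b - a for a, b in zip(series, series[1:])]
    return series

def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2*logL. Lower is better,
    so the candidate model with the minimum AIC is selected."""
    return 2 * n_params - 2 * log_likelihood
```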


Author(s):  
Anthony Joe Turkson ◽  
Timothy Simpson ◽  
John Awuah Addor

A recurrent event remains the outcome variable of interest in many biometric studies. Recurrent events are events of defined interest that can occur to the same person more than once during the study period. This study presents an overview of pertinent models for analyzing recurrent events. Aims: To introduce, compare, evaluate and discuss the pros and cons of four models for analyzing recurrent events, so as to validate previous findings on the superiority or appropriateness of these models. Study Design: A comparative study based on simulation of recurrent event models applied to tertiary data from cancer studies. Methodology: Code in R was implemented to simulate four recurrent event models, namely the Andersen and Gill (AG) model; the Prentice, Williams and Peterson (PWP) models; the Wei, Lin and Weissfeld (WLW) model; and the Cox frailty model. These models were then applied to the first forty subjects from a study of bladder cancer tumors. The data set contained the first four recurrences of the tumor for each patient, and each recurrence time was recorded from the patient's entry into the study. Each time to an event or censoring defines an isolated risk interval. Results: The choice of model leads to different conclusions, and the choice depends on the risk intervals, baseline hazard, risk set and correlation adjustment, or, more simply, on the type of data and the research question. The PWP-GT model can be used if the research question focuses on whether treatment was effective for the kth event since the previous event; if the research question asks whether treatment was effective for the kth event since the start of treatment, the PWP-TT model can be used instead. The AG model is adequate if a common baseline hazard can be assumed, but it lacks the detail and versatility of the event-specific models. The WLW model is well suited to data with diverse events for the same person, which underscores a potentially different baseline hazard for each event type. Conclusion: PWP-GT has proven to be the most useful model for analyzing recurrent event data.
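The PWP total-time versus gap-time distinction comes down to how each subject's risk intervals are laid out. A small sketch of that data-preparation step (illustrative; the actual models would then be fitted, e.g. with stratified Cox regression in R's survival package):

```python
def pwp_intervals(event_times, followup):
    """Build PWP risk intervals from one subject's sorted recurrence times.
    Returns (total-time intervals, gap-time intervals), one per event stratum:
    PWP-TT keeps the clock running from study entry, PWP-GT resets it to
    zero at each event."""
    times = sorted(event_times) + [followup]
    starts = [0] + sorted(event_times)
    total_time = [(s, t) for s, t in zip(starts, times)]
    gap_time = [(0, t - s) for s, t in zip(starts, times)]
    return total_time, gap_time
```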


Author(s):  
Raphael Ayan Adeleke ◽  
Ibrahim Ismaila Itopa ◽  
Sule Omeiza Bashiru

To curb the spread of contagious diseases and the recent polio outbreak in Nigeria, health departments must set up and operate clinics to dispense medications or vaccines. Residents arrive at the clinic according to an external (not necessarily Poisson) arrival process. On arrival, a resident goes to the first workstation and then, based on his or her information, moves from one workstation to another within the clinic. The queuing network is decomposed by estimating the performance of each workstation using a combination of exact and approximate models. A key contribution of this research is the introduction of approximations for workstations with batch arrivals and multiple parallel servers, for workstations with batch service processes and multiple parallel servers, and for self-service workstations. The models were validated for likely scenarios using data collected from one of the state vaccination clinics in the country during vaccination exercises.
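A standard exact building block for such per-workstation decomposition is the Erlang C formula for a multi-server Markovian queue; a sketch is below (the paper's batch-arrival and batch-service approximations and its general, non-Poisson arrival process are not reproduced):

```python
import math

def erlang_c(arrival_rate, service_rate, c):
    """M/M/c probability that an arriving resident must wait (Erlang C),
    with c parallel servers and utilization rho = load/c < 1."""
    a = arrival_rate / service_rate               # offered load in Erlangs
    rho = a / c                                   # per-server utilization
    summation = sum(a ** k / math.factorial(k) for k in range(c))
    top = a ** c / (math.factorial(c) * (1.0 - rho))
    return top / (summation + top)
```

For c = 1 the formula reduces to the utilization rho itself, a convenient sanity check.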

