constant variance
Recently Published Documents


TOTAL DOCUMENTS

99
(FIVE YEARS 33)

H-INDEX

13
(FIVE YEARS 3)

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 447
Author(s):  
Hsuan-Yu Chen ◽  
Chiachung Chen

A calibration curve expresses the relationship between the response of a measuring technique and the standard concentration of the target analyte. The calibration equation relates the response of a chemical instrument to the known properties of reference materials and is established using regression analysis; an adequate calibration equation ensures the performance of these instruments. Most studies use linear and polynomial equations. This study uses data sets from previous studies and evaluates four types of calibration equations: linear, higher-order polynomial, exponential rise to maximum, and power equations. A constant variance test was performed to assess the suitability of each calibration equation for a given data set, and suspected outliers in the data sets were verified. The standard error of the estimate, s, was used as the criterion for fitting performance, the Prediction Sum of Squares (PRESS) statistic was used to compare prediction ability, and residual plots were examined as a further criterion. The results show that linear and higher-order polynomial equations do not provide accurate calibration equations for many data sets, whereas nonlinear equations suit most of them, and different forms of calibration equations are proposed. A logarithmic transformation of the response is used to stabilize non-constant variance in the response data; when outliers are removed, the fit and prediction ability of the calibration equation increase significantly. Even for data sets obtained with the same equipment in the same laboratory, the adequate calibration equations differed, and no universal calibration equation could be found for these data sets. The method used in this study can be applied to other chemical instruments to establish an adequate calibration equation and ensure the best performance.
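As a rough illustration of the workflow described above, the sketch below fits several candidate calibration equations to a small hypothetical standard-versus-response data set and compares them with the standard error of the estimate s and a leave-one-out PRESS statistic. The data, starting values, and model list are invented for illustration and are not the data sets analysed in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration data: standard concentrations x and instrument responses y.
x = np.array([0.5, 1, 2, 5, 10, 20, 50, 100.0])
y = np.array([0.8, 1.5, 2.9, 6.8, 12.1, 21.5, 44.0, 78.0])

# Candidate calibration equations with illustrative starting values.
models = {
    "linear":       (lambda x, a, b: a + b * x,                [1.0, 1.0]),
    "quadratic":    (lambda x, a, b, c: a + b * x + c * x**2,  [1.0, 1.0, 0.0]),
    "exp_rise_max": (lambda x, a, b: a * (1 - np.exp(-b * x)), [100.0, 0.02]),
    "power":        (lambda x, a, b: a * x**b,                 [1.0, 1.0]),
}

def press(f, x, y, p0):
    """Leave-one-out Prediction Sum of Squares."""
    resid = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        popt, _ = curve_fit(f, x[mask], y[mask], p0=p0, maxfev=10000)
        resid.append(y[i] - f(x[i], *popt))
    return float(np.sum(np.square(resid)))

for name, (f, p0) in models.items():
    popt, _ = curve_fit(f, x, y, p0=p0, maxfev=10000)
    resid = y - f(x, *popt)
    s = np.sqrt(np.sum(resid**2) / (len(x) - len(p0)))  # standard error of the estimate
    print(f"{name:12s} s = {s:6.3f}   PRESS = {press(f, x, y, p0):8.3f}")
```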


2021 ◽  
Author(s):  
Junguo Shi ◽  
Xuhua Hu ◽  
Shanshan Dou ◽  
David Alemzero ◽  
Elvis Alhassan

Abstract This study analyses the drivers of innovation in offshore wind energy for a select group of countries, applying quantile regression and GMM approaches over the period 2010-2019. The OLS results from the quantile analysis indicate that the log of trademarks, carbon emissions, offshore wind capacity, and electricity from renewable energy are significant drivers of innovation in offshore wind energy. The Breusch-Pagan/Cook-Weisberg test for heteroskedasticity indicates that the variables have constant variance, supporting the robustness of the findings. The quantile regression shows that at the 25th and 75th quantiles, the coefficients of the log of trademarks, the log of trade flows, and the log of scientific and technical journals are significantly different from zero and exhibit varied effects on the explained variable, patents. Similarly, the analysis applied IV-GMM estimation in ivreg2 to test the overidentifying restrictions with the Hansen J statistic and to provide a robust moment-conditions analysis. The findings are consistent with the earlier analysis: the log of trademarks, the log of offshore wind capacity, the log of carbon emissions, scientific and technical journals, the log of patents, and electricity from renewables are significant drivers of innovation. Robustness of the GMM models was assessed by applying the Huber-White sandwich estimator of the variance of the GMM linear estimators; the robust ivreg2 analysis shows that the estimates are efficient under homoskedasticity and the statistics are robust to heteroskedasticity. Finally, the interaction term "cross" is significant, underscoring the importance of the interaction variables in scaling innovation. This study can serve as a reference for policymakers seeking to scale up innovation in offshore wind energy.
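The following sketch illustrates, on simulated data, the kind of checks described above: an OLS baseline with a Breusch-Pagan test for constant variance and quantile regressions at the 25th and 75th percentiles using statsmodels. The variable names and the simulated panel are hypothetical; this is not the study's data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "log_trademark": rng.normal(5, 1, n),
    "log_capacity":  rng.normal(3, 1, n),
    "log_co2":       rng.normal(8, 1, n),
})
df["log_patent"] = (0.6 * df["log_trademark"] + 0.3 * df["log_capacity"]
                    + 0.1 * df["log_co2"] + rng.normal(0, 0.5, n))

# OLS baseline and a Breusch-Pagan check for constant error variance on its residuals.
ols = smf.ols("log_patent ~ log_trademark + log_capacity + log_co2", df).fit()
lm_stat, lm_pval, f_stat, f_pval = het_breuschpagan(ols.resid, ols.model.exog)
print(f"Breusch-Pagan LM p-value: {lm_pval:.3f}  (large p suggests constant variance)")

# Quantile regressions at the 25th and 75th percentiles.
for q in (0.25, 0.75):
    qr = smf.quantreg("log_patent ~ log_trademark + log_capacity + log_co2", df).fit(q=q)
    print(f"q = {q}:", qr.params.round(3).to_dict())
```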


Fishes ◽  
2021 ◽  
Vol 6 (3) ◽  
pp. 35
Author(s):  
Marcelo V. Curiel-Bernal ◽  
E. Alberto Aragón-Noriega ◽  
Miguel Á. Cisneros-Mata ◽  
Laura Sánchez-Velasco ◽  
S. Patricia A. Jiménez-Rosenberg ◽  
...  

Obtaining the best possible estimates of individual growth parameters is essential in studies of physiology, fisheries management, and conservation of natural resources, since growth is a key component of population dynamics. In the present work, we use data on an endangered fish species to demonstrate the importance of selecting the right error structure for the data when fitting growth models in multimodel inference. The totoaba (Totoaba macdonaldi) is a fish species endemic to the Gulf of California, increasingly studied in recent times due to a perceived threat of extinction. Previous works estimated individual growth using the von Bertalanffy model assuming a constant variance of length-at-age. Here, we reanalyze the same data under five different variance assumptions to fit the von Bertalanffy and Gompertz models. We found consistent, significant differences between the constant and nonconstant error structure scenarios, and we use the growth performance index ϕ′ to illustrate how the wrong error structure can produce growth parameter values that lead to biased conclusions. Based on these results, for totoaba and related species, we recommend using the observed error structure to obtain the individual growth parameters.
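A minimal sketch of the comparison described above, using simulated length-at-age data rather than the totoaba observations: the von Bertalanffy model is fitted by maximum likelihood under a constant-variance and a multiplicative error structure, and the two fits are compared with AIC and the growth performance index ϕ′. All values and starting points are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
age = np.repeat(np.arange(1, 16), 5).astype(float)
Linf_true, k_true, t0_true = 160.0, 0.18, -0.5
mu_true = Linf_true * (1 - np.exp(-k_true * (age - t0_true)))
length = mu_true * np.exp(rng.normal(0, 0.08, mu_true.size))   # multiplicative noise

def vb(age, Linf, k, t0):
    return Linf * (1 - np.exp(-k * (age - t0)))

def nll_constant(p):
    Linf, k, t0, sigma = p
    resid = length - vb(age, Linf, k, t0)
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + resid**2 / sigma**2)

def nll_multiplicative(p):
    Linf, k, t0, cv = p
    mu = vb(age, Linf, k, t0)
    sd = cv * mu                                  # SD proportional to predicted length
    return 0.5 * np.sum(np.log(2 * np.pi * sd**2) + (length - mu)**2 / sd**2)

for name, nll, x0 in [("constant", nll_constant, [150, 0.2, 0, 10]),
                      ("multiplicative", nll_multiplicative, [150, 0.2, 0, 0.1])]:
    fit = minimize(nll, x0, method="Nelder-Mead")
    aic = 2 * len(x0) + 2 * fit.fun
    Linf, k = fit.x[0], fit.x[1]
    phi = np.log10(k) + 2 * np.log10(Linf)        # growth performance index phi'
    print(f"{name:14s} AIC = {aic:8.2f}  Linf = {Linf:6.1f}  k = {k:.3f}  phi' = {phi:.3f}")
```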


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Rupel Nargunam ◽  
William W. S. Wei ◽  
N. Anuradha

Abstract This study focuses on the Indian gold futures market, whose primary participants attach sentimental value to the underlying asset and whose private physical gold holdings rank second globally. The trade of gold futures is related to seasons, festivals, and government policy, so the paper incorporates seasonality and intervention in the analysis. Because of non-constant variance, we also use the standard variance-stabilizing transformation method and the ARIMA/GARCH modelling method and compare their forecast performance for gold futures prices. The results show that while the standard variance transformation method may provide better point forecasts, the ARIMA/GARCH modelling method provides much shorter forecast intervals. The empirical results of this study, which rationalise the effect of seasonality in the Indian bullion derivative market, have not previously been reported in the literature.
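The sketch below contrasts the two approaches on a simulated price series (not the Indian gold futures data): a log transformation followed by an ARIMA model, versus a GARCH(1,1) model of the returns fitted with the third-party arch package. The package choice, model orders, and data are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

rng = np.random.default_rng(2)
n = 500
returns = rng.normal(0.0003, 0.01, n) * (1 + 0.5 * np.abs(np.sin(np.arange(n) / 50)))
price = pd.Series(3000 * np.exp(np.cumsum(returns)))   # hypothetical futures prices

# Approach 1: log transformation to stabilise the variance, then ARIMA forecasts.
log_price = np.log(price)
arima_fit = ARIMA(log_price, order=(1, 1, 1)).fit()
fc = arima_fit.get_forecast(steps=10)
point = np.exp(fc.predicted_mean)                      # back-transform to price level
interval = np.exp(fc.conf_int(alpha=0.05))             # 95% forecast interval

# Approach 2: GARCH(1,1) on percentage returns, giving a time-varying variance
# and hence forecast intervals that adapt to current volatility.
garch_fit = arch_model(100 * price.pct_change().dropna(),
                       vol="GARCH", p=1, q=1).fit(disp="off")
var_fc = garch_fit.forecast(horizon=10).variance.iloc[-1]

print(round(float(point.iloc[-1]), 1), interval.iloc[-1].round(1).to_dict())
print("GARCH forecast SD at horizon 10:", float(np.sqrt(var_fc.iloc[-1])))
```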


Author(s):  
Iryna Shcherbak ◽  
Yuliia Kovalova ◽  
Volodymyr Korobka

It is proposed to identify regions of stationarity on the electrical load curves of 10/0.4 kV transformer substations in residential areas, for subsequent modelling of the load curves and for applying control actions to the operating modes of consumer-regulators in order to flatten the overall electrical load curve. The relevance and complexity of the problem arise because the load of 10/0.4 kV transformer substations in residential areas varies randomly. This is due to the large number, range, and diversity of connected consumers and the lack of deterministic relationships between electricity consumers; in addition, the random load function over a daily interval is non-stationary. Accordingly, a procedure was developed for selecting regions of stationarity on the load curves of 10/0.4 kV transformer substations in residential areas. Load curves of 10/0.4 kV transformer substations were measured, and from these measurements the distribution law of the active and reactive power readings was investigated. After confirming the hypothesis of a normal distribution law, parametric tests are performed: Fisher's F-test is used to confirm the hypothesis of constant variance, and Student's t-test is used to confirm the hypothesis of a constant mathematical expectation. The next stage, given constancy of the variance and the mathematical expectation, is to determine the autocorrelation coefficients of the random function under study and to plot the autocorrelation function. To approximate this function, the autocorrelation coefficients are fitted by the least squares method and the decay of the autocorrelation function is analysed. Carrying out these stages makes it possible to identify the regions of stationarity on the load curves of 10/0.4 kV transformer substations. For a reliable description of how the load of 10/0.4 kV transformer substations changes, the use of a probabilistic-statistical modelling method is justified, since it takes into account the stochastic nature of the load changes within the selected regions of stationarity.
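As a simplified illustration of the parametric tests described above, the sketch below applies Fisher's F-test for equal variances and Student's t-test for equal means to two hypothetical segments of a daily load curve, then computes the first autocorrelation coefficients. The segment data and the 0.05 threshold are invented for illustration; this is not the measured substation data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
segment_a = 120 + rng.normal(0, 6, 48)   # kW, hypothetical night-time segment
segment_b = 122 + rng.normal(0, 7, 48)   # kW, hypothetical adjacent segment

# Fisher's F-test: ratio of sample variances compared against the F distribution.
f_stat = np.var(segment_a, ddof=1) / np.var(segment_b, ddof=1)
f_pval = 2 * min(stats.f.cdf(f_stat, 47, 47), stats.f.sf(f_stat, 47, 47))

# Student's t-test for equal means (equal variances assumed if the F-test passes).
t_stat, t_pval = stats.ttest_ind(segment_a, segment_b, equal_var=True)
print(f"F = {f_stat:.2f} (p = {f_pval:.3f}),  t = {t_stat:.2f} (p = {t_pval:.3f})")

# If both hypotheses hold, treat the combined segment as stationary and estimate
# its autocorrelation coefficients.
if f_pval > 0.05 and t_pval > 0.05:
    x = np.concatenate([segment_a, segment_b])
    x = x - x.mean()
    acf = [np.sum(x[: -k or None] * x[k:]) / np.sum(x * x) for k in range(10)]
    print(np.round(acf, 3))
```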


2021 ◽  
Author(s):  
Daniel Bramich ◽  
Monica Menendez ◽  
Lukas Ambühl

Understanding the inter-relationships between traffic flow, density, and speed through the study of the fundamental diagram of road traffic is critical for traffic modelling and management. Consequently, over the last 85 years, a wealth of models have been developed for its functional form. However, there has been no clear answer as to which model is the most appropriate for observed (i.e. empirical) fundamental diagrams and under which conditions. A lack of data has been partly to blame. Motivated by shortcomings in previous reviews, we first present a comprehensive literature review on modelling the functional form of empirical fundamental diagrams. We then perform fits of 50 previously proposed models to a high quality sample of 10,150 empirical fundamental diagrams pertaining to 25 cities. Comparing the fits using information criteria, we find that the non-parametric Sun model greatly outperforms all of the other models. The Sun model maintains its winning position regardless of road type and congestion level. Our study, the first of its kind when considering the number of models tested and the amount of data used, finally provides a definitive answer to the question "Which model for the functional form of an empirical fundamental diagram is currently the best?". The word "currently" in this question is key, because previously proposed models adopt an inappropriate Gaussian noise model with constant variance. We advocate that future research should shift focus to exploring more sophisticated noise models. This will lead to an improved understanding of empirical fundamental diagrams and their underlying functional forms.
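As a small illustration of fitting functional forms under the constant-variance Gaussian noise model discussed above, the sketch below fits two classic forms (Greenshields and Underwood, chosen here only as examples and not necessarily among the 50 models tested) to synthetic flow-density points and compares them with AIC.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
density = np.sort(rng.uniform(5, 120, 300))               # veh/km
flow_true = 60 * density * np.exp(-density / 45)          # veh/h, Underwood-like shape
flow = flow_true + rng.normal(0, 120, density.size)       # constant-variance Gaussian noise

def greenshields(k, vf, kj):   # q = vf * k * (1 - k / kj)
    return vf * k * (1 - k / kj)

def underwood(k, vf, k0):      # q = vf * k * exp(-k / k0)
    return vf * k * np.exp(-k / k0)

for name, f, p0 in [("Greenshields", greenshields, [60, 150]),
                    ("Underwood",    underwood,    [60, 50])]:
    popt, _ = curve_fit(f, density, flow, p0=p0)
    rss = np.sum((flow - f(density, *popt))**2)
    n, p = density.size, len(p0) + 1                       # +1 for the noise variance
    aic = n * np.log(rss / n) + 2 * p                      # Gaussian, constant variance
    print(f"{name:12s} AIC = {aic:8.1f}")
```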


2021 ◽  
Author(s):  
Shoko Claris ◽  
Chikobvu Delson

Abstract Background and Objective: The COVID-19 pandemic caused approximately 11,421,822 laboratory-confirmed cases globally, with 196,750 confirmed cases in South Africa, by the 6th of July 2020. Coronavirus is transmitted from one person to another even before any symptoms appear, posing a severe threat to society as a whole. This study aims to develop an ARIMA model to predict daily COVID-19 cases in South Africa using data from online sources. Materials and Methods: The study used online data on daily COVID-19 reported cases in South Africa (SA) recorded from 6 March 2020 to 6 July 2020. Time series analysis is used to investigate the trend in the daily COVID-19 cases, leading to an Auto-Regressive Integrated Moving Average (ARIMA) model. Results: The time plot of the series suggests that second-order differencing is needed to achieve a stationary time series. The best candidate model was an ARIMA(7,2,0). Residuals of the selected model are uncorrelated and normally distributed with mean zero and constant variance, as expected of a good model. The fitted model predicted a continuous increase in daily COVID-19 cases for the next 20 days, up to day 143, with slight falls at a few time points. Conclusion: The results showed that ARIMA models can be applied to COVID-19 patterns in South Africa. The model forecasted a continuous increase in daily COVID-19 cases in South Africa. These results are important for public health planning in order to combat the pandemic.
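A minimal sketch of the modelling steps described above, on a simulated daily-case series rather than the reported South African data: fit an ARIMA(7,2,0), check the residuals for autocorrelation and roughly constant variance, and produce a 20-day-ahead forecast.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(5)
t = np.arange(123)
cases = pd.Series(np.maximum(0, 5 + 0.9 * t**1.6 + rng.normal(0, 40, t.size)).round())

fit = ARIMA(cases, order=(7, 2, 0)).fit()

# Residual diagnostics: Ljung-Box for autocorrelation, plus a rough constant-variance
# check comparing the residual variance in the first and second halves of the series.
lb = acorr_ljungbox(fit.resid, lags=[10])
half = len(fit.resid) // 2
var_ratio = fit.resid.iloc[:half].var() / fit.resid.iloc[half:].var()
print(f"Ljung-Box p = {float(lb['lb_pvalue'].iloc[0]):.3f}, variance ratio = {var_ratio:.2f}")

print(fit.forecast(steps=20).round())   # 20-day-ahead point forecasts
```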


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Muhammad Abu Shadeque Mullah ◽  
James A. Hanley ◽  
Andrea Benedetti

Abstract Background Generalized linear mixed models (GLMMs), typically used for analyzing correlated data, can also be used for smoothing by treating the knot coefficients of a regression spline as random effects. The resulting models are called semiparametric mixed models (SPMMs). Allowing the random knot coefficients to follow a normal distribution with mean zero and constant variance is equivalent to using a penalized spline with a ridge regression type penalty. We introduce the least absolute shrinkage and selection operator (LASSO) type penalty in the SPMM setting by letting the coefficients at the knots follow a Laplace (double exponential) distribution with mean zero. Methods We adopt a Bayesian approach and use the Markov Chain Monte Carlo (MCMC) algorithm for model fitting. Through simulations, we compare the performance of curve fitting in an SPMM using a LASSO type penalty to that of using a ridge penalty for binary data. We apply the proposed method to obtain smooth curves from data on the relationship between the number of pack-years of smoking and the risk of developing chronic obstructive pulmonary disease (COPD). Results The LASSO penalty performs as well as the ridge penalty for simple shapes of association and outperforms the ridge penalty when the shape of association is complex or linear. Conclusion We demonstrated that the LASSO penalty captured complex dose-response associations better than the ridge penalty in an SPMM.
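The sketch below mimics the ridge-versus-LASSO comparison in a simplified, non-Bayesian form: a truncated-line spline basis for a simulated pack-years variable, with the knot coefficients shrunk by an L2 or an L1 penalty in penalised logistic regression. It is not the SPMM/MCMC implementation used in the paper; the data, knots, and penalty strengths are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 400
pack_years = rng.uniform(0, 60, n)
true_logit = -2 + 0.08 * pack_years - 0.04 * np.maximum(pack_years - 30, 0)
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# Truncated-line spline basis: x enters linearly, and one basis column (x - knot)_+
# per knot carries the knot coefficients that the penalty shrinks.
knots = np.linspace(5, 55, 10)
X = np.column_stack([pack_years] + [np.maximum(pack_years - k, 0) for k in knots])

ridge = LogisticRegression(penalty="l2", C=1.0, max_iter=5000).fit(X, y)
lasso = LogisticRegression(penalty="l1", C=1.0, solver="liblinear", max_iter=5000).fit(X, y)

# The LASSO fit typically sets many knot coefficients exactly to zero; the ridge fit
# shrinks them all towards zero without removing any.
print("non-zero knot coefficients (ridge):", int(np.sum(ridge.coef_[0, 1:] != 0)))
print("non-zero knot coefficients (lasso):", int(np.sum(lasso.coef_[0, 1:] != 0)))
```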


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Christy A. Denckla ◽  
Sun Yeop Lee ◽  
Rockli Kim ◽  
Georgina Spies ◽  
Jennifer J. Vasterling ◽  
...  

Abstract There are individual differences in health outcomes following exposure to childhood maltreatment, yet constant individual variance is often assumed in analyses. Among 286 Black South African women, the association between childhood maltreatment and neurocognitive health, defined here as neurocognitive performance (NP), was first estimated assuming constant variance. Then, without assuming constant variance, we applied Goldstein's method (Encyclopedia of Statistics in Behavioral Science, Wiley, 2005) to model "complex level-1 variation" in NP as a function of childhood maltreatment. Mean performance on some tests of information processing speed (Digit-Symbol, Stroop Word, and Stroop Color) decreased with increasing severity of childhood maltreatment, without evidence of significant individual variation. Conversely, we found significant individual variation by severity of childhood maltreatment in tests of information processing speed (Trail Making Test) and executive function (Color Trails 2 and Stroop Color-Word), in the absence of mean differences. These exploratory results suggest that individual-level heterogeneity in neurocognitive performance among women exposed to childhood maltreatment warrants further exploration. The methods presented here may be used in a person-centered framework to better understand vulnerability to the toxic neurocognitive effects of childhood maltreatment at the individual level, ultimately informing personalized prevention and treatment.
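As a rough, two-step stand-in for the idea of modelling level-1 variance (a simplification, not Goldstein's method itself), the sketch below first fits the mean of a simulated neurocognitive score as a function of maltreatment severity and then regresses the log squared residuals on severity to test whether individual-level variance changes with severity. All data and names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 286
severity = rng.uniform(0, 25, n)
# Simulated score: no mean trend, but residual SD grows with severity.
score = 50 + rng.normal(0, 5 + 0.4 * severity, n)

X = sm.add_constant(severity)
mean_fit = sm.OLS(score, X).fit()

# Step 2: does log(residual^2) depend on severity?  A significant slope indicates
# non-constant individual-level (level-1) variance even without a mean difference.
var_fit = sm.OLS(np.log(mean_fit.resid**2 + 1e-8), X).fit()
print(f"mean slope p = {mean_fit.pvalues[1]:.3f}, variance slope p = {var_fit.pvalues[1]:.3f}")
```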

