On the use of experimental measured data to derive the linear regression usually adopted for determining the performance parameters of a solar cooker

2022 ◽  
Vol 181 ◽  
pp. 105-115
Author(s):  
Celestino Rodrigues Ruivo ◽  
Xabier Apaolaza-Pagoaga ◽  
Giovanni Di Nicola ◽  
Antonio Carrillo-Andrés
2011 ◽  
Vol 22 (No. 2) ◽  
pp. 58-66 ◽  
Author(s):  
M. Houška ◽  
K. Kýhos ◽  
P. Novotná ◽  
A. Landfeld ◽  
J. Strohalm

The study examined the gel strength of native egg white as a function of pH and dry matter content. Egg white samples were isolated from fresh eggs and from eggs at different stages of storage. Gel strength was measured with a TA-XT2i Texture Analyser. The study showed that gel strength increases with rising pH and dry matter content. The influence of egg age is more complicated: gel strength increases over the first 14 days after laying and slowly decreases afterwards. A mathematical dependence of gel strength was fitted to the measured data by non-linear regression: gel strength (p/cm²) = exp[0.00674·time (days) + 0.289·dry matter (%) + 0.1165·pH + 1.433].
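The fitted equation can be applied directly; a minimal sketch in Python, where the example inputs (a 14-day-old egg white, 12% dry matter, pH 9) are illustrative assumptions rather than values from the study:

```python
import math

def gel_strength(time_days, dry_matter_pct, ph):
    """Predicted gel strength (p/cm^2) from the non-linear regression
    reported above; inputs outside the measured ranges are extrapolation."""
    return math.exp(0.00674 * time_days
                    + 0.289 * dry_matter_pct
                    + 0.1165 * ph
                    + 1.433)

# Illustrative (assumed) inputs: 14-day-old egg, 12% dry matter, pH 9
print(round(gel_strength(14, 12.0, 9.0), 1))
```

Consistent with the study's findings, the predicted strength rises monotonically with both pH and dry matter content.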


2014 ◽  
Vol 117 (3) ◽  
pp. 231-238 ◽  
Author(s):  
Alan D. Moore ◽  
Meghan E. Downs ◽  
Stuart M. C. Lee ◽  
Alan H. Feiveson ◽  
Poul Knudsen ◽  
...  

This investigation was designed to measure aerobic capacity (V̇o2peak) during and after long-duration International Space Station (ISS) missions. Astronauts (9 males, 5 females: 49 ± 5 yr, 77.2 ± 15.1 kg, 40.6 ± 6.4 ml·kg−1·min−1 [mean ± SD]) performed peak cycle tests ∼90 days before flight, 15 days after launch, every ∼30 days in-flight, and on recovery days 1 (R + 1), R + 10, and R + 30. Expired metabolic gas fractions, ventilation, and heart rate (HR) were measured. Data were analyzed using mixed-model linear regression. The main findings of this study were that V̇o2peak decreased early in-flight (∼17%) then gradually increased during flight but never returned to preflight levels. V̇o2peak was lower on R + 1 and R + 10 than preflight but recovered by R + 30. Peak HR was not different from preflight at any time during or following flight. A sustained decrease in V̇o2peak during and/or early postflight was not a universal finding in this study, since seven astronauts were able to attain their preflight V̇o2peak levels either at some time during flight or on R + 1. Four of these astronauts performed in-flight exercise at higher intensities compared with those who experienced a decline in V̇o2peak, and three had low aerobic capacities before flight. These data indicate that, while V̇o2peak may be difficult to maintain during long-duration ISS missions, aerobic deconditioning is not an inevitable consequence of long-duration spaceflight.


2015 ◽  
Author(s):  
Nelson Fumo ◽  
Daniel C. Lackey ◽  
Sara McCaslin

Energy consumption from buildings is a major component of the overall energy consumption by end-use sectors in industrialized countries. In the United States of America (USA), the residential sector alone accounts for half of the combined residential and commercial energy consumption. Therefore, efforts toward energy consumption modeling based on statistical and engineering models are in continuous development. Statistical approaches need measured data but not building characteristics; engineering approaches need building characteristics but not measured data, except when a calibrated model is the goal. Among statistical models, linear regression analysis has shown promising results because of its reasonable accuracy and relatively simple implementation compared to other methods. In addition, when observed or measured data are available, statistical models are a good option to avoid the burden associated with engineering approaches. However, the dynamic behavior of buildings suggests that models accounting for dynamic effects may lead to more effective regression models, which is not possible with standard linear regression analysis. Utilizing lag variables is one method of autoregression that can model the dynamic behavior of energy consumption. The purpose of using lag variables is to account for the thermal energy stored in and released from the mass of the building, which affects the response of HVAC equipment to changes in outdoor or weather parameters. In this study, energy consumption and outdoor temperature data from a research house are used to develop autoregressive models of energy consumption during the cooling season, with lag variables to account for the dynamics of the house. Models with no lag variable, one lag variable, and two lag variables are compared. To investigate the effect of the time interval on the quality of the models, data intervals of 5 minutes, 15 minutes, and one hour are used to generate the models. The 5-minute interval is used because it is the resolution of the acquired data; the 15-minute interval because it is a common interval for electric smart meters; and the one-hour interval because it is the common interval for building energy simulation. The primary results show that the use of lag variables greatly improves the accuracy of the models, but a 5-minute interval is too small to avoid the dependence of energy consumption on operating parameters. All mathematical models and their quality parameters are presented, along with supporting graphical representations as a visual aid for comparing models.
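The lag-variable approach can be sketched with ordinary least squares on a synthetic series; the coefficients, noise levels, and variable names below are assumptions for illustration, not the paper's fitted models:

```python
import numpy as np

# Synthetic hourly series standing in for the measured house data
rng = np.random.default_rng(0)
n = 500
temp = 25 + 5 * np.sin(np.arange(n) * 2 * np.pi / 24) + rng.normal(0, 0.5, n)
energy = np.empty(n)
energy[0] = energy[1] = 2.0
for t in range(2, n):
    # Lagged consumption terms mimic thermal storage in the building mass
    energy[t] = (0.5 + 0.05 * temp[t] + 0.6 * energy[t-1]
                 - 0.2 * energy[t-2] + rng.normal(0, 0.05))

def fit_ols(X, y):
    """OLS with intercept via least squares."""
    beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(y)), X]), y,
                               rcond=None)
    return beta

def r2(X, y, beta):
    """Coefficient of determination of the fitted model."""
    pred = np.column_stack([np.ones(len(y)), X]) @ beta
    return 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

# No-lag model: energy ~ outdoor temperature only
X0, y0 = temp[2:].reshape(-1, 1), energy[2:]
# Two-lag model: energy ~ temperature + two lagged consumption terms
X2 = np.column_stack([temp[2:], energy[1:-1], energy[:-2]])
print(r2(X0, y0, fit_ols(X0, y0)), r2(X2, y0, fit_ols(X2, y0)))
```

Because the two-lag design nests the no-lag one, its in-sample R² cannot be lower; the gap illustrates the dynamic effect the paper attributes to thermal mass.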


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Dae-Hong Min ◽  
Hyung-Koo Yoon

Abstract Deterministic models have been widely applied in landslide risk assessment (LRA), but they have limitations in obtaining the various geotechnical and hydraulic properties they require. The objective of this study is to suggest a new deterministic method based on machine learning (ML) algorithms. Eight crucial variables of LRA are selected with reference to expert opinions, and the output value is set to the safety factor derived from Mohr–Coulomb failure theory for an infinite slope. Linear regression and a neural network based on ML are applied to find the best model between independent and dependent variables. To increase the reliability of the linear regression and the neural network, the results of back-propagation methods, including gradient descent, Levenberg–Marquardt (LM), and Bayesian regularization (BR), are compared. An 1800-item dataset is constructed from measured data and from artificial data generated by a geostatistical technique, which can provide information on an unknown area based on measured data. The results of the linear regression and the neural network show that the LM and BR back-propagation methods yield a high coefficient of determination. The important variables are also investigated through random forest (RF) to reduce the number of input variables. Only four variables (shear strength, soil thickness, elastic modulus, and fines content) demonstrate high reliability for LRA. The results show that it is possible to perform LRA with ML, and that these four variables are enough when it is difficult to obtain the full set.
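The target the models learn, the infinite-slope safety factor from Mohr–Coulomb theory, can be computed directly; a minimal dry-slope sketch, with pore pressure omitted and parameter values assumed for illustration:

```python
import math

def infinite_slope_fs(c_kpa, phi_deg, gamma_kn_m3, depth_m, beta_deg):
    """Safety factor of a dry infinite slope from Mohr-Coulomb theory:
    FS = (c + gamma*z*cos^2(beta)*tan(phi)) / (gamma*z*sin(beta)*cos(beta)).
    Pore-pressure effects are omitted for brevity."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal = gamma_kn_m3 * depth_m * math.cos(beta) ** 2   # normal stress term
    driving = gamma_kn_m3 * depth_m * math.sin(beta) * math.cos(beta)
    return (c_kpa + normal * math.tan(phi)) / driving

# Assumed illustrative soil: c = 10 kPa, phi = 30 deg, gamma = 18 kN/m^3,
# failure-plane depth 2 m, slope angle 35 deg
print(round(infinite_slope_fs(10.0, 30.0, 18.0, 2.0, 35.0), 2))
```

FS > 1 indicates a stable slope; steepening the slope angle lowers FS, which is the sensitivity the ML models must reproduce.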


1984 ◽  
Vol 24 (02) ◽  
pp. 215-223 ◽  
Author(s):  
T.M. Tao ◽  
A.T. Watson

Abstract We develop algorithms that can be used for computer implementation of the Johnson, Bossler, and Naumann (JBN) method for estimating relative permeabilities from displacement experiments. The developed algorithms differ in the method used to calculate derivatives of measured data. The Monte Carlo error analysis developed in Part 1 of this paper (Pages 209–14) is used to evaluate the performance of the computer algorithms. Introduction The JBN method and the Jones-Roszelle method, a modified version of the JBN method, are used to calculate relative permeability values explicitly from data collected during displacement experiments. Although the procedures are straightforward, these methods do require the calculation of derivatives of measured pressure and production data. It is well known that the effects of small measurement errors are amplified in the process of differentiating data. Procedures for implementing this process must be selected carefully to ensure accurate relative permeability estimates. In this paper we develop algorithms suitable for computer implementation of the JBN method. These algorithms differ in the method used to calculate derivatives of measured data. We use the Monte Carlo error analysis developed in Part 1 to evaluate the accuracy of relative permeability estimates obtained by use of the algorithms. This work represents the first published study of the accuracy of relative permeability estimates from displacement experiments, as well as the first published computer algorithm for explicit methods of calculating relative permeabilities. The developed algorithms can be readily extended to the Jones-Roszelle method. Calculation of Derivatives The JBN method is summarized in Appendix B in Part 1 of this paper.
The critical step in the development of a computer algorithm based on the JBN method is the evaluation of the derivatives dS_g/dW and d(1/WI)/d(1/W); all other steps are straightforward. To estimate derivatives of data, one may represent each entire set of data, or a portion of the data, with a function, and then take the derivatives of that function. The functions may be constructed by interpolation or by smoothing. When interpolation is used, a function is chosen that represents the data values exactly. In the smoothing method, a smooth function that only approximates each data value is chosen. In this work, we considered five different algorithms for constructing functions from discrete data. For purposes of discussion, let f denote the quantity calculated from the measured data, t the independent variable, t_i a particular value of the independent variable, and denote f(t_i) as f_i; the pair (f, t) represents either (S_g, W) or (1/WI, 1/W). The algorithms investigated for constructing functions to be used to obtain the derivative of f at, say, t_i, include the following: (1) interpolate the points in the vicinity of t_i using a quadratic polynomial; (2) use least squares to fit a cubic polynomial to the points near t_i; (3) use linear regression analysis to choose a single functional representation of the entire range of data; (4) fit a spline with a fixed number and distribution of knots to the entire range of data; (5) choose the number and distribution of knots for a spline that is then fit to the entire range of data. In each case, analytic expressions are used to calculate the derivatives. The first two methods represent local algorithms, since only data in the vicinity of t_i are used to construct the approximation to the function f; the latter three methods use all the data in constructing the approximating functions and hence are called global algorithms. The first method is the only interpolation method considered.
It is well known that the use of polynomial interpolating functions for noisy (or nonexact) data can lead to large errors in approximation, and the error generally increases as the order of the interpolating polynomial increases. For equally spaced points, this method is equivalent to the use of a centered difference quotient and is certainly the most convenient of the five methods. The second method is probably the most convenient of the smoothing algorithms, since the approximating cubic polynomial centered at t_i can be calculated easily with linear least squares. The global algorithms are discussed in the next two sections. Linear Regression Algorithm. In the third method of derivative calculation considered, linear regression analysis is used to construct approximating functions for each entire data set. The analysis used is explained here. Further details on regression analysis may be found in Draper and Smith.
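The two local algorithms can be sketched as follows; the window size and the test function are assumptions for illustration, not choices from the paper:

```python
import numpy as np

def centered_diff(t, f, i):
    """Local method 1: quadratic interpolation through three points; for
    equally spaced data its derivative at t_i equals the centered
    difference quotient."""
    return (f[i+1] - f[i-1]) / (t[i+1] - t[i-1])

def cubic_ls_deriv(t, f, i, half_window=3):
    """Local method 2: least-squares cubic fit over a window centered at
    t_i, differentiated analytically (window size is an assumed choice)."""
    lo, hi = max(0, i - half_window), min(len(t), i + half_window + 1)
    coeffs = np.polyfit(t[lo:hi], f[lo:hi], 3)
    return np.polyval(np.polyder(coeffs), t[i])

# Noise-free check: f(t) = t^2, so the exact derivative at t = 0.5 is 1.0
t = np.linspace(0, 1, 41)
f = t ** 2
i = 20
print(centered_diff(t, f, i), cubic_ls_deriv(t, f, i))
```

On noise-free polynomial data both agree with the exact derivative; the methods differ, as the abstract notes, in how they amplify measurement noise.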


Sports ◽  
2019 ◽  
Vol 7 (6) ◽  
pp. 139 ◽  
Author(s):  
Ashwin Phatak ◽  
Markus Gruber

Statistical analysis of real in-game situations plays an increasing role in talent identification and player recruitment across team sports. Recently, visual exploration frequency (VEF) in football has been discussed as being one of the important performance-determining parameters. However, until now, VEF has been studied almost exclusively in laboratory settings. Moreover, the VEF of individuals has not been correlated with performance parameters in a statistically significant number of top-level players. Thus, the objective of the present study was to examine the relationship between VEF and individual performance parameters in elite football midfielders. Thirty-five midfielders participating in the Euro 2016 championship were analyzed using game video. Their VEF was categorized into scans, transition scans, and total scans. Linear regression analysis was used to correlate the three different VEF parameters with the passing percentage and the turnover rate for individual players. The linear regression showed significant positive correlations between scan rate (p = 0.033, R² = 3.0%) and total scan rate (p = 0.015, R² = 4.0%) and passing percentage but not between transition scan rate and passing percentage (p = 0.074). There was a significant negative correlation between transition scan rate and turnover rate (p = 0.023, R² = 3.5%) but not between total scan rate (p = 0.857) or scan rate (p = 0.817) and turnover rate. In conclusion, the present study shows that players with a higher VEF may complete more passes and cause fewer turnovers. VEF explains up to 4% of variance in pass completion and turnover rate and thus should be considered as one of the factors that can help to evaluate players and identify talents as well as to tailor training interventions to the needs of midfielders up to the highest level of professional football.
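The regression step can be sketched on synthetic stand-in data; the study's player data are not reproduced here, and the effect size and variable ranges are assumptions for illustration:

```python
import numpy as np

# Synthetic stand-in: scan rate and passing percentage for 35 hypothetical
# midfielders (the assumed positive effect mirrors the study's direction)
rng = np.random.default_rng(1)
scan_rate = rng.uniform(0.2, 0.6, 35)
passing_pct = 70 + 20 * scan_rate + rng.normal(0, 2, 35)

# Ordinary least-squares line and coefficient of determination (R^2)
slope, intercept = np.polyfit(scan_rate, passing_pct, 1)
pred = slope * scan_rate + intercept
r2 = 1 - (np.sum((passing_pct - pred) ** 2)
          / np.sum((passing_pct - passing_pct.mean()) ** 2))
print(slope, r2)
```

A p-value for the slope, as reported in the study, would additionally require a t-test on the slope estimate (e.g. via `scipy.stats.linregress`).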


1985 ◽  
Vol 30 (10) ◽  
pp. 824-824
Author(s):  
William L. Hays
