measured variable
Recently Published Documents


TOTAL DOCUMENTS

148
(FIVE YEARS 45)

H-INDEX

15
(FIVE YEARS 3)

2021 ◽  
Vol 14 (1) ◽  
pp. 170
Author(s):  
Francisco Rodríguez-Puerta ◽  
Esteban Gómez-García ◽  
Saray Martín-García ◽  
Fernando Pérez-Rodríguez ◽  
Eva Prada

The installation of research or permanent plots is a very common task in growth and forest yield research. At young ages, tree height is the most commonly measured variable, so the location of individuals is necessary when repeated measures are taken and when spatial analysis is required. Identifying the coordinates of individual trees and re-measuring the height of all trees is difficult and particularly costly in time and money. The data used come from three Pinus pinaster Ait. and three Pinus radiata D. Don plantations of 0.8 ha, with ages ranging between 2 and 5 years and mean heights between 1 and 5 m. Five individual tree detection (ITD) methods are evaluated, based on the Canopy Height Model (CHM), in which the height of each tree is identified and its crown segmented. Three CHM resolutions are used for each method. All ITD algorithms tested tend to underestimate the number of trees. The best results are obtained with the R packages ForestTools and rLiDAR. The best CHM resolution for identifying trees was always 10 cm. We did not detect any difference in the relative error (RE) between Pinus pinaster and Pinus radiata. We found a pattern in ITD accuracy depending on the height of the trees to be detected: accuracy is lower for trees less than 1 m high than for larger trees (RE close to 12% versus 1% for taller trees). Regarding the estimation of tree height, using the CHM tends to underestimate it, while using the point cloud gives practically unbiased results. The stakeout of forestry research plots and the re-measurement of individual tree heights are operations that can be performed with UAV-based LiDAR sensors. Compared with pole and/or hypsometer measurement, individual tree geolocation and height measurement by UAV LiDAR is highly accurate and cost-effective, especially once tree height reaches 1–1.5 m.
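The CHM-based detection step described above can be sketched as a local-maximum search over the height raster, which is the core idea behind detectors such as those in ForestTools and rLiDAR. The sketch below uses Python with SciPy on a synthetic two-crown CHM; the window size, minimum-height cutoff, and the CHM itself are illustrative assumptions, not the study's data or code.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_treetops(chm, window=5, min_height=1.0):
    """Return (row, col, height) of local maxima in a canopy height model."""
    local_max = maximum_filter(chm, size=window) == chm
    mask = local_max & (chm >= min_height)
    rows, cols = np.nonzero(mask)
    return [(r, c, chm[r, c]) for r, c in zip(rows, cols)]

# Synthetic CHM (illustrative): two Gaussian "crowns" on a 100x100 grid,
# standing in for a 10 cm resolution raster
x, y = np.meshgrid(np.arange(100), np.arange(100))
chm = (3.0 * np.exp(-((x - 30)**2 + (y - 30)**2) / 50)
       + 2.0 * np.exp(-((x - 70)**2 + (y - 65)**2) / 40))

tops = detect_treetops(chm, window=11, min_height=1.0)
print(len(tops), "treetops detected")
```

The minimum-height filter mirrors the paper's observation that trees under about 1 m are the hardest to detect reliably; in practice the window size must be tuned to the expected crown diameter at the CHM's resolution.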


Author(s):  
Rowan Mott ◽  
Thomas Prowse ◽  
Micha Jackson ◽  
Daniel Rogers ◽  
Jody O'Connor ◽  
...  

Quantifying habitat quality depends on measuring a site's relative contribution to population growth rate. This is challenging for studies of waterbirds, whose high mobility can decouple demographic rates from local habitat conditions and make sustained monitoring of individuals near-impossible. To overcome these challenges, biologists have used many direct and indirect proxies of waterbird habitat quality. However, consensus on which methods are most appropriate for a given scenario is lacking. We undertook a structured literature review of the methods used to quantify waterbird habitat quality and provide a synthesis of the context-dependent strengths and limitations of those methods. Our structured search of the Web of Science database returned a sample of 398 studies, upon which our review was based. The reviewed studies assessed habitat quality either by measuring habitat attributes (e.g., food abundance, water quality, vegetation structure) or by measuring attributes of the waterbirds themselves (e.g., demographic parameters, body condition, behaviour, distribution). Measuring habitat attributes, although these are only indirectly related to demographic rates, has the advantage of being unaffected by waterbird behavioural stochasticity. Conversely, waterbird-derived measures (e.g., body condition, peck rates) may be more directly related to demographic rates than habitat variables, but may be subject to greater stochastic variation (e.g., behavioural change due to the presence of conspecifics). Therefore, caution is needed to ensure that the measured variable does influence waterbird demographic rates; in the reviewed studies, this assumption was usually based on ecological theory rather than empirical evidence. Our review highlighted that there is no single best, universally applicable method for quantifying waterbird habitat quality. The specifics of each project (e.g., time frame, spatial scale, funding) will influence the choice of variables measured.
Where possible, practitioners should measure variables most directly related to demographic rates. Generally, measuring multiple variables yields a better chance of accurately capturing the relationship between habitat characteristics and demographic rates.


10.5219/1672 ◽  
2021 ◽  
Vol 15 ◽  
pp. 961-969
Author(s):  
Fery Lusviana Widiany ◽  
Mochammad Sja'bani ◽  
Susetyowati ◽  
Emy Huriyati

This study aims to determine the organoleptic quality of a liquid food formula made from snail (Pila ampullacea), tempeh, and moringa (Moringa oleifera) leaves. The study was conducted in Yogyakarta, Indonesia. It involved 25 moderately trained panelists and 5 trained panelists who met the inclusion criteria. The measured variable was organoleptic quality, covering color, texture, taste, and aroma. The formula tested was a powder made from snail, tempeh, and moringa leaves, with snail flour, tempeh flour, and moringa leaf flour in a 50:30:20 proportion. More than 50% of panelists liked the liquid food formula in terms of color, texture, taste, and aroma. An independent t-test comparing the organoleptic ratings of the two panelist groups showed p = 0.710 for color, p = 0.335 for texture, p = 0.603 for taste, and p = 0.880 for aroma. In conclusion, most panelists liked the liquid food formula made from snail, tempeh, and moringa leaves, and there was no difference between the two groups' organoleptic ratings in color, texture, taste, or aroma.
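The group comparison above can be reproduced in outline with SciPy's independent t-test. The hedonic scores below are invented for illustration; only the group sizes (25 and 5 panelists) follow the study.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Hypothetical 9-point hedonic scores; NOT the study's actual data
moderately_trained = rng.integers(5, 10, size=25)  # n = 25 panelists
trained = rng.integers(5, 10, size=5)              # n = 5 panelists

# Welch's variant is a reasonable default given the unequal group sizes
t_stat, p_value = ttest_ind(moderately_trained, trained, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("No significant difference between panelist groups")
```

The study would run one such test per attribute (color, texture, taste, aroma), yielding the four p-values it reports.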


2021 ◽  
Vol 10 (18) ◽  
pp. 4150
Author(s):  
Mark E. Fenton ◽  
Sarah A. Wade ◽  
Bibi N. Pirrili ◽  
Zsolt J. Balogh ◽  
Christopher W. Rowe ◽  
...  

Multidisciplinary team (MDT) meetings are the mainstay of the decision-making process for patients presenting with complex clinical problems such as papillary thyroid carcinoma (PTC). Adherence to guidelines by MDTs has been extensively investigated; however, scarce evidence exists on MDT performance and variability where guidelines are less prescriptive. We evaluated the consistency of MDT management recommendations for T1 and T2 PTC patients and explored key variables that may influence therapeutic decision making. A retrospective review of the prospective database of all T1 and T2 PTC patients discussed by the MDT between January 2016 and May 2021 was conducted. Univariate analysis (with Bonferroni-corrected significance threshold p < 0.006) was performed to identify clinical variables linked to recommendations for completion thyroidectomy and radioactive iodine (RAI). Of 468 patients presented at the thyroid MDT, 144 pT1 PTC and 118 pT2 PTC met the selection criteria. Only 18% (n = 12) of pT1 PTC patients initially managed with hemithyroidectomy were recommended completion thyroidectomy; mean tumour diameter was the only variable differing between groups (p = 0.003). pT2 patients were recommended completion thyroidectomy in 66% (n = 16) of instances, and no measured variable explained the difference in recommendation. Among pT1 patients initially managed with total thyroidectomy, 71% (n = 55) were not recommended RAI, with T1a status (p = 0.001) and diameter (p = 0.001) as statistically different variables. For pT2 patients, 60% (n = 41) were recommended RAI post-total thyroidectomy, with no differences observed among groups. The majority of MDT recommendations were concordant for patients with similar measurable characteristics. Discordant recommendations for a small group of patients were not explained by measured variables and may be accounted for by individual patient factors. Further research into the MDT decision-making process is warranted.
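The reported threshold is what a Bonferroni correction produces when the family-wise α of 0.05 is split across the number of univariate comparisons; a cutoff near 0.006 is consistent with eight or nine tests, though the abstract does not state the exact count. A minimal arithmetic sketch, under that assumption:

```python
# Bonferroni-corrected per-test threshold: alpha divided by number of tests.
# The abstract's p < 0.006 is consistent with 8 or 9 comparisons (assumed here).
alpha = 0.05
for n_tests in (8, 9):
    print(f"{n_tests} tests -> per-test threshold {alpha / n_tests:.4f}")
```

The correction keeps the probability of any false positive across the whole family of tests at or below α.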


Author(s):  
David Lomeling ◽  
Salah Joseph Huria

Historical rainfall data from 1997-2016 for Juba County, South Sudan were used in a Feed-Forward Neural Network (FFNN) model to make future predictions. Annual rainfall data were aggregated into three seasons (MAMJ, JAS, and OND) and trained to produce forecasts for the period 2017-2034 using the Alyuda Forecaster XL software. Best training of the time series was attained once the error between the measured and predicted variable, expressed as MSE or AE, reached a minimum during gradient descent. The results showed forecast accuracy above 85% for the MAMJ and JAS months and between 60-80% for the OND months. The Seasonal Kendall (SK) test on the future rainfall forecasts, as well as the Theil-Sen slope, showed negative monotonic trends in the mean values of all three seasons until the end of 2034, with MAMJ, JAS, and OND at 100, 150, and 80 mm respectively. The forecasts showed that the MAMJ months of 2019 to 2027 will be moderately wet, except April 2021, which will experience severe wetness due to intensive rainfall. Interdecadal severe drought, with less than 60, 100, and 10 mm for MAMJ, JAS, and OND respectively, is expected between 2028 and 2033, after almost two decades. The declining onset of the MAMJ rains is expected to significantly affect the timing of land preparation and crop planting.
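The trend statistics mentioned above have direct SciPy counterparts: `theilslopes` for the Theil-Sen slope and `kendalltau` for a Mann-Kendall-style monotonic test (SciPy does not ship a seasonal variant, so a plain Kendall test on a single season's series is shown). The rainfall series below is synthetic, with a built-in decline, purely for illustration.

```python
import numpy as np
from scipy.stats import theilslopes, kendalltau

rng = np.random.default_rng(1)
years = np.arange(2017, 2035)
# Synthetic one-season rainfall with a gentle decline (illustrative only)
rainfall = 180 - 2.0 * (years - 2017) + rng.normal(0, 5, size=years.size)

# theilslopes takes (y, x); the slope is the median of all pairwise slopes
slope, intercept, lo, hi = theilslopes(rainfall, years)
tau, p = kendalltau(years, rainfall)
print(f"Theil-Sen slope = {slope:.2f} mm/yr, Kendall tau = {tau:.2f} (p = {p:.3f})")
```

A negative slope together with a negative, significant tau is the pattern the paper reports for all three seasons.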


2021 ◽  
Author(s):  
Tom Mooney ◽  
Kelda Bratley ◽  
Amin Amin ◽  
Timothy Jadot

Abstract The use of conventional process simulators is commonplace for system design and is growing in online monitoring and optimization applications. While these simulators are extremely useful, additional value can be extracted by combining simulator predictions with field inputs from measurement devices such as flowmeters and pressure and temperature sensors. The statistical nature of the inputs (e.g., measurement uncertainty) is typically not considered in the forward calculations performed by the simulators, which may lead to erroneous results if the raw measurement is in error or biased. A complementary modeling methodology is proposed to identify and correct measurement and process errors as an integral part of a robust simulation practice. The studied approach ensures best-quality data for direct use in process models and simulators for operations and process surveillance. From a design perspective, this approach also makes it possible to evaluate the impact of the uncertainty of measured and unmeasured variables on CAPEX and to optimize instrument/meter design. In this work, an extended statistical approach to process simulation is examined using Data Validation and Reconciliation (DVR). The DVR methodology is compared to conventional non-statistical, deterministic process simulators. A key difference is that DVR uses any measured variable (inlet, outlet, or in-between measurements), including its uncertainty, in the modelled process as an input, whereas traditional simulators use only inlet measurement values to estimate all other measured and unmeasured variables. A walk through the DVR calculations and applications is given using several comparative case studies of a typical surface process facility.
Examples include the simulation of a commingled multistage oil and gas separation process, the validation of separator flowmeters and fluid samples, and the quantification of unmeasured variables along with their uncertainties. The studies demonstrate the added value of using redundancy from all available measurements in a process model based on the DVR method. Single-point and data-streaming field cases highlight the dependency and complementary roles of traditional simulators and the data validation provided by the DVR methodology; it is shown how robust measurement management strategies can be developed based on DVR's effective surveillance capabilities. Moreover, the cases demonstrate how DVR-based CAPEX and OPEX improvements are derived from effective hardware selection using cost versus measurement-precision trade-offs, soft-measurement substitutes, and condition-based maintenance strategies.
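The core of data reconciliation can be illustrated as a weighted least-squares adjustment of redundant measurements subject to a balance constraint; for a linear constraint the reconciled values have a closed form via Lagrange multipliers. The three-stream splitter below, with its measured values and uncertainties, is invented for illustration and is not one of the paper's case studies.

```python
import numpy as np

# Three redundant flow measurements around a splitter: total = out1 + out2.
# Values and uncertainties are illustrative.
m = np.array([100.5, 60.2, 41.1])      # measured values
u = np.array([1.0, 0.8, 0.8])          # standard uncertainties
A = np.array([[1.0, -1.0, -1.0]])      # mass-balance constraint: A @ x = 0

V = np.diag(u**2)                      # measurement covariance (uncorrelated)
# Weighted least-squares reconciliation, Lagrange-multiplier closed form:
# x = m - V A^T (A V A^T)^-1 (A m)
correction = V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ m)
x = m - correction

print("reconciled:", np.round(x, 3))
print("balance residual:", (A @ x).item())
```

Note how the less certain total-flow meter absorbs most of the 0.8-unit imbalance: each measurement is corrected in proportion to its variance, which is exactly the redundancy-driven behaviour the abstract credits to DVR.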


Author(s):  
Yousri Elghoul ◽  
Fatma Bahri ◽  
Khaled Trabelsi ◽  
Hamdi Chtourou ◽  
Mohamed Frikha ◽  
...  

Improving the acquisition and retention of a new motor skill is of great importance. The present study (i) investigated the effects of a gradual difficulty-manipulation strategy combined with different feedback (FB) frequencies on performance accuracy and consistency when learning a novel fine motor coordination task, and (ii) examined relationships between novel fine motor task performance and executive function (EF), working memory (WM), and perceived difficulty (PD). Thirty-six right-handed novice physical education students volunteered to participate. Participants were divided into three progressive-difficulty groups (PDG): 100% visual FB (FB1), 50% FB (FB2), and 33% FB (FB3). Difficulty was increased progressively by manipulating the distance to the target: 2 m, 2.37 m, and 3.56 m. Three FB modalities were investigated: 100% visual FB (100% FB), a 50% reduced-feedback condition (50% RFB), and a 33% reduced-feedback condition (33% RFB). Performance assessments were conducted following the familiarization, acquisition, and retention learning phases. Two stress conditions of dart throws were investigated: a free condition (FC) and a time-pressure condition (TPC). After the learning intervention, the data showed that, under the free condition, the 100% FB group had a significant improvement in accuracy during all learning phases. Under the time-pressure condition, the measured variables (accuracy and consistency) of the 50% RFB and 33% RFB groups showed a significant linear improvement in performance. The association between the reduced FB frequency and the task difficulty in the 50% group may impose a more appropriate and manageable cognitive load than in the 33% RFB and 100% FB groups.
The present findings could have practical implications for practitioners because, while strategies are clearly necessary for improving learning, the efficacy of the process appears to be essentially based on the characteristics of the learners.


2021 ◽  
Vol 23 (2) ◽  
pp. 95-103
Author(s):  
Modawy Abdelgader Albasheer ◽  
Ning Iriyanti ◽  
Ismoyowati Ismoyowati ◽  
Efka Aris Rimbawanto

This study aimed to evaluate the effect of safflower oil (Carthamus tinctorius L.) and inositol on the digestive profile of male Sentul chickens. A total of 182 Sentul chickens aged 17 weeks were reared to 23 weeks in 91 battery-cage units (6 chickens/unit). The research was conducted as a Completely Randomized Design (CRD) with nine treatments and three replicates (6 chickens/replicate). The treatments were R0 = control/basal feed + 0% safflower oil and 0% inositol; R1 = basal feed + 0.5% safflower oil; R2 = basal feed + 1.0% safflower oil; R3 = basal feed + 0.5% inositol; R4 = basal feed + 1.0% inositol; R5 = basal feed + 0.5% safflower oil and 0.5% inositol; R6 = basal feed + 0.5% safflower oil and 1.0% inositol; R7 = basal feed + 1.0% safflower oil and 0.5% inositol; R8 = basal feed + 1.0% safflower oil and 1.0% inositol. The basal feed was composed of corn, rice bran, soybean kernel, fishmeal, palm oil, calcium carbonate (CaCO3), topmix, lysine, and methionine, supplemented in the treatment rations with safflower oil (Carthamus tinctorius L.) and inositol. The measured variables comprised the digestive profile (percentage weights of the intestine, digesta, proventriculus, and gizzard), intestine length, crypt depth, the width and length of the intestinal villi, and the intestinal histology profile. The data were subjected to analysis of variance (ANOVA) followed by an Honestly Significant Difference (HSD) test. The results showed that incorporating safflower oil and inositol into the feed did not significantly affect (P>0.05) the digestive profile of male Sentul chickens. Conclusively, safflower oil and inositol at up to 1% in the feed are safe for male Sentul chickens and do not interfere with the performance of the digestive organs, producing relatively similar intestinal weight and length, crypt depth, and length and width of intestinal villi.
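The ANOVA-plus-HSD analysis described above can be sketched with SciPy (`f_oneway`, and `tukey_hsd`, available from SciPy 1.8 onward). The intestine-weight percentages below are hypothetical; only the design of three replicates per treatment follows the study, and just three of the nine rations are shown.

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

rng = np.random.default_rng(2)
# Hypothetical intestine-weight percentages, 3 replicates per treatment
r0 = rng.normal(2.1, 0.2, size=3)  # control ration
r1 = rng.normal(2.2, 0.2, size=3)  # + 0.5% safflower oil
r2 = rng.normal(2.0, 0.2, size=3)  # + 1.0% safflower oil

f_stat, p = f_oneway(r0, r1, r2)
print(f"ANOVA: F = {f_stat:.2f}, p = {p:.3f}")
if p > 0.05:
    print("No significant treatment effect; HSD follow-up not required")
else:
    # Tukey's HSD identifies which treatment pairs differ
    print(tukey_hsd(r0, r1, r2))
```

A non-significant omnibus F, as in this study's P>0.05 result, means the pairwise HSD comparisons are not expected to reveal differences either.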


2021 ◽  
Vol 6 ◽  
Author(s):  
Nancy L. Staus ◽  
Kari O'Connell ◽  
Martin Storksdieck

The aim of this paper is to describe an analytical approach for addressing the ceiling effect, a measurement limitation that affects research and evaluation efforts in informal STEM learning projects. The ceiling effect occurs when a large proportion of subjects begin a study with very high scores on the measured variable(s), such that participation in an educational experience cannot yield significant measured gains among these learners. The effect is widespread in informal science learning because participation in these experiences is self-selective: participants are already interested in and knowledgeable about the content area. When the ceiling effect is present, no conclusions can be drawn regarding the influence of an intervention on participants' learning outcomes, which could lead evaluators and funders to underestimate the positive effects of STEM programs. We discuss how person-centered analytic approaches that segment samples in theory-driven ways could help address the ceiling effect, and provide an illustrative example using data from a recent evaluation of a STEM afterschool program.
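A ceiling effect of the kind described is easy to demonstrate by simulation: two groups receive the same true gain, but the group starting near the scale maximum shows a smaller measured gain because scores cannot exceed the scale. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
scale_max = 10        # top of the measurement scale
true_gain = 1.5       # identical true improvement for both groups

# Pre-test scores: one group near the scale ceiling, one mid-scale
pre_high = rng.normal(9.2, 0.5, size=200).clip(0, scale_max)
pre_mid = rng.normal(5.0, 0.5, size=200).clip(0, scale_max)

# Both groups improve equally, but measured scores are capped at the maximum
post_high = (pre_high + true_gain).clip(0, scale_max)
post_mid = (pre_mid + true_gain).clip(0, scale_max)

print(f"measured gain, high pre-test: {np.mean(post_high - pre_high):.2f}")
print(f"measured gain, mid pre-test:  {np.mean(post_mid - pre_mid):.2f}")
```

The high-scoring group's measured gain is compressed toward the headroom left on the scale, which is precisely why segmenting the sample before analysis, as the paper proposes, avoids averaging the compressed and uncompressed learners together.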


PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0252966
Author(s):  
Kathryn E. Keenan ◽  
Zydrunas Gimbutas ◽  
Andrew Dienstfrey ◽  
Karl F. Stupic ◽  
Michael A. Boss ◽  
...  

Recent innovations in quantitative magnetic resonance imaging (MRI) measurement methods have led to improvements in accuracy, repeatability, and acquisition speed, and have prompted renewed interest in reevaluating the medical value of quantitative T1. The purpose of this study was to determine the bias and reproducibility of T1 measurements across a variety of MRI systems, with an eye toward assessing the feasibility of applying diagnostic T1 thresholds across multiple clinical sites. We used the International Society of Magnetic Resonance in Medicine/National Institute of Standards and Technology (ISMRM/NIST) system phantom to assess variations in T1 measurements, using a slow, reference-standard inversion recovery sequence and a rapid, commonly available variable flip angle sequence, across MRI systems at 1.5 tesla (T) (two vendors, n = 9 MRI systems) and 3 T (three vendors, n = 18). We compared the T1 measurements from the inversion recovery and variable flip angle scans to the ISMRM/NIST phantom reference values, using analysis of variance (ANOVA) to test for statistical differences between T1 measurements grouped by MRI scanner manufacturer and/or static field strength. The inversion recovery method showed minor over- and under-estimations relative to the NMR-measured T1 values at both 1.5 T and 3 T. Variable flip angle measurements deviated substantially more from the NMR-measured T1 values than the inversion recovery measurements. At 3 T, the variable flip angle T1 measured on one vendor's systems differed significantly from that of the other two vendors for most of the samples throughout the clinically relevant range of T1, with no consistent pattern of discrepancy between vendors.
We suggest establishing rigorous quality control procedures for validating quantitative MRI methods, to promote confidence and stability in the associated measurement techniques and to enable translation of diagnostic thresholds from the research center to the entire clinical community.
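For reference, the variable flip angle method estimates T1 from the spoiled gradient echo signal equation via the standard linearization S/sin α = E1·(S/tan α) + M0(1 − E1), where E1 = exp(−TR/T1). The sketch below recovers a known T1 from noiseless synthetic signals; the TR, flip angles, and T1 value are illustrative, not the study's protocol.

```python
import numpy as np

def spgr_signal(t1, tr, alpha_deg, m0=1.0):
    """Ideal steady-state spoiled gradient echo signal (no B1 error, no noise)."""
    a = np.deg2rad(alpha_deg)
    e1 = np.exp(-tr / t1)
    return m0 * np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a))

def vfa_t1(signals, alphas_deg, tr):
    """Estimate T1 via the linearization S/sin(a) = E1*(S/tan(a)) + M0*(1-E1)."""
    a = np.deg2rad(alphas_deg)
    y = signals / np.sin(a)
    x = signals / np.tan(a)
    slope, _ = np.polyfit(x, y, 1)  # slope is E1 = exp(-TR/T1)
    return -tr / np.log(slope)

tr = 5.0                         # ms (illustrative)
true_t1 = 800.0                  # ms, within the clinically relevant range
alphas = np.array([3.0, 15.0])   # two flip angles, a typical VFA protocol
signals = spgr_signal(true_t1, tr, alphas)

t1_est = vfa_t1(signals, alphas, tr)
print(f"estimated T1 = {t1_est:.1f} ms")
```

With noiseless data the fit is exact; the biases the study reports arise in practice largely from B1 (flip angle) inaccuracy and imperfect spoiling, which this idealized model omits.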

