sampling errors
Recently Published Documents

TOTAL DOCUMENTS: 715 (FIVE YEARS: 143)
H-INDEX: 49 (FIVE YEARS: 5)

Author(s):  
Sanjeet Pandey

Abstract: The brain is recognized as one of the most complex organs of the human body. Abnormal formation of cells may affect its normal functioning. These abnormal cells may be benign, resulting in low-grade glioma, or malignant, resulting in high-grade glioma. Treatment plans vary according to the grade of glioma detected, which creates a need for precise glioma grading. According to the World Health Organization, biopsy is considered the gold standard in glioma grading. However, biopsy is an invasive procedure that may contain sampling errors as well as subjectivity errors. This has motivated clinicians to look for other methods that may overcome the limitations of biopsy reports. Machine learning and deep learning approaches using MRI are considered the most promising alternatives reported in the literature. The presented work is based on AdaBoost, an ensemble learning approach. The developed model was optimized with respect to two hyperparameters, the number of estimators and the learning rate, while keeping the base model fixed; a decision tree was used as the base model. The proposed model was trained and validated on the BraTS 2018 dataset. The optimized model achieves reasonable accuracy on the classification task of high-grade versus low-grade glioma. Keywords: high-grade glioma, low-grade glioma, AdaBoost, texture features,
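A minimal sketch of the setup this abstract describes, using scikit-learn: AdaBoost with a fixed decision-tree base model, tuned over the two named hyperparameters (number of estimators and learning rate). The feature matrix `X` and HGG/LGG labels `y` (e.g. texture features derived from BraTS 2018) are assumed to be prepared elsewhere; the grid values and CV folds are illustrative, not the paper's.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV

def tune_adaboost(X, y):
    # Fixed base model, as in the abstract (use base_estimator= on older scikit-learn)
    base = DecisionTreeClassifier(max_depth=1)
    grid = {
        "n_estimators": [50, 100, 200],     # first tuned hyperparameter
        "learning_rate": [0.01, 0.1, 1.0],  # second tuned hyperparameter
    }
    search = GridSearchCV(AdaBoostClassifier(estimator=base), grid,
                          cv=5, scoring="accuracy")
    return search.fit(X, y)  # search.best_params_ holds the selected pair
```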


Author(s):  
Arunselvan Ramaswamy ◽  
Shalabh Bhatnagar

In this paper, we consider the stochastic iterative counterpart of the value iteration scheme wherein only noisy and possibly biased approximations of the Bellman operator are available. We call this counterpart the approximate value iteration (AVI) scheme. Neural networks are often used as function approximators, in order to counter Bellman’s curse of dimensionality. In this paper, they are used to approximate the Bellman operator. Because neural networks are typically trained using sample data, errors and biases may be introduced. The design of AVI accounts for implementations with biased approximations of the Bellman operator and sampling errors. We present verifiable sufficient conditions under which AVI is stable (almost surely bounded) and converges to a fixed point of the approximate Bellman operator. To ensure the stability of AVI, we present three different yet related sets of sufficient conditions that are based on the existence of an appropriate Lyapunov function. These Lyapunov function–based conditions are easily verifiable and new to the literature. The verifiability is enhanced by the fact that a recipe for the construction of the necessary Lyapunov function is also provided. We also show that the stability analysis of AVI can be readily extended to the general case of set-valued stochastic approximations. Finally, we show that AVI can also be used in more general circumstances, that is, for finding fixed points of contractive set-valued maps.
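As a rough illustration of the scheme analysed here (a generic sketch, not the paper's exact formulation), the stochastic-approximation form of AVI iterates the value estimate using a noisy, possibly biased operator estimate `T_hat`, which the caller supplies (e.g. a neural-network approximation of the Bellman operator):

```python
import numpy as np

def avi(T_hat, V0, steps=10_000):
    """Iterate V_{n+1} = V_n + a_n * (T_hat(V_n) - V_n), where T_hat is a
    noisy, possibly biased estimate of the Bellman operator."""
    V = np.asarray(V0, dtype=float)
    for n in range(1, steps + 1):
        a_n = 1.0 / n  # diminishing step sizes: sum a_n = inf, sum a_n^2 < inf
        V = V + a_n * (T_hat(V) - V)
    return V  # under conditions like the paper's, this converges to a fixed
              # point of the approximate Bellman operator
```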


2021 ◽  
Vol 4 ◽  
pp. 77
Author(s):  
Noirin O' Herlihy ◽  
Sarah Griffin ◽  
Robert Gaffney ◽  
Patrick Henn ◽  
Ali S Khashan ◽  
...  

Background: Blood sampling errors, including ‘wrong blood in tube’ (WBIT), may have adverse effects on clinical outcomes. WBIT errors occur when the blood sample in the tube is not that of the patient identified on the label. This study aims to determine the effect of proficiency-based progression (PBP) training in phlebotomy on the rate of blood sampling errors (including WBIT). Methods: A non-randomised controlled trial compared the blood sampling error rate of 43 historical controls from 2016, who had not undergone PBP training, with that of a PBP-trained intervention group of 44 interns in 2017. After the PBP training programme was implemented, the blood sampling error rate of 46 interns in 2018 was also compared with that of the 43 historical controls from 2016. Data analysis was performed using logistic regression, adjusting for sample timing. Results: In 2016, the 43 interns had a total blood sampling error rate of 2.4%, compared with an error rate of 1.2% among the 44 interns in 2017 (adjusted OR=0.50, 95% CI 0.36-0.70; p<0.01). In 2018, the 46 interns had an error rate of 1.9% (adjusted OR=0.89, 95% CI 0.65-1.21; p=0.46) when compared with the 2016 historical controls. There were three WBITs in 2016, three in 2017, and five in 2018. Conclusions: The study demonstrates that PBP training in phlebotomy has the potential to reduce blood sampling errors. Trial registration number: NCT03577561
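A hypothetical sketch of the kind of analysis reported: a logistic regression of the error outcome on cohort membership, adjusted for sample timing. The column names and the simulated data below are illustrative placeholders, not the study's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "error": rng.binomial(1, 0.02, 2000),          # 1 = blood sampling error (simulated)
    "cohort_2017": rng.binomial(1, 0.5, 2000),     # 1 = PBP-trained cohort (simulated)
    "sample_timing": rng.choice(["day", "night"], 2000),
})
model = smf.logit("error ~ cohort_2017 + C(sample_timing)", data=df).fit(disp=0)
print(np.exp(model.params["cohort_2017"]))         # adjusted odds ratio
print(np.exp(model.conf_int().loc["cohort_2017"])) # 95% CI for the OR
```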


2021 ◽  
Vol 14 (12) ◽  
pp. 7681-7691
Author(s):  
Karlie N. Rees ◽  
Timothy J. Garrett

Abstract. Due to the discretized nature of rain, the measurement of a continuous precipitation rate by disdrometers is subject to statistical sampling errors. Here, Monte Carlo simulations are employed to obtain the precision of rain detection and rate as a function of disdrometer collection area and compared with World Meteorological Organization guidelines for a 1 min sample interval and 95 % probability. To meet these requirements, simulations suggest that measurements of light rain with rain rates R ≤ 0.50 mm h⁻¹ require a collection area of at least 6 cm × 6 cm, and for R = 1 mm h⁻¹, the minimum collection area is 13 cm × 13 cm. For R = 0.01 mm h⁻¹, a collection area of 2 cm × 2 cm is sufficient to detect a single drop. Simulations are compared with field measurements using a new hotplate device, the Differential Emissivity Imaging Disdrometer. The field results suggest an even larger plate may be required to meet the stated accuracy, likely in part due to non-Poissonian hydrometeor clustering.
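A minimal sketch of the Monte Carlo idea, assuming drop arrivals on the collection area are Poisson over the 1 min sample interval. The drop flux per unit of accumulated rain (`flux`, in drops per mm of rain per cm²) is a placeholder constant, not the paper's drop-size parameterisation.

```python
import numpy as np

def detection_probability(R_mm_per_h, side_cm, flux=25.0, trials=100_000, seed=0):
    """Estimate the probability of detecting at least one drop in 1 minute
    on a square plate of side `side_cm`, at rain rate R_mm_per_h."""
    rng = np.random.default_rng(seed)
    area_cm2 = side_cm ** 2
    rain_mm = R_mm_per_h / 60.0           # rain accumulated over 1 minute
    lam = flux * rain_mm * area_cm2       # expected drop count (Poisson mean)
    counts = rng.poisson(lam, size=trials)
    return np.mean(counts >= 1)           # fraction of trials with >= 1 drop

# e.g. detection_probability(0.01, 2) for a 2 cm x 2 cm plate at 0.01 mm/h
```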


2021 ◽  
Vol 18 (1) ◽  
Author(s):  
Sonja Hartnack ◽  
Malgorzata Roos

Abstract Background One of the emerging themes in epidemiology is the use of interval estimates. Currently, three interval estimates for confidence (CI), prediction (PI), and tolerance (TI) are at a researcher's disposal and are accessible within the open-source framework R. These three types of statistical intervals serve different purposes. Confidence intervals are designed to describe a parameter with some uncertainty due to sampling errors. Prediction intervals aim to predict future observation(s), including some uncertainty present in the actual and future samples. Tolerance intervals are constructed to capture a specified proportion of a population with a defined confidence. It is well known that interval estimates support a greater knowledge gain than point estimates. Thus, a good understanding and the use of CI, PI, and TI underlie good statistical practice. While CIs are taught in introductory statistics classes, PIs and TIs are less familiar. Results In this paper, we provide a concise tutorial on two-sided CI, PI and TI for binary variables. This hands-on tutorial is based on our teaching materials. It contains an overview of their meaning and applicability from both a classical and a Bayesian perspective. Based on a worked-out example from veterinary medicine, we provide guidance and code that can be directly applied in R. Conclusions This tutorial can be used by others for teaching, either in a class or for self-instruction of students and senior researchers.
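The tutorial's code is in R; as an analogous illustration in Python, the following computes a two-sided 95% confidence interval for a binary proportion (Wilson method). The counts are made-up numbers, not the paper's veterinary example.

```python
from statsmodels.stats.proportion import proportion_confint

# e.g. 7 positives out of 20 sampled animals (illustrative counts)
low, high = proportion_confint(count=7, nobs=20, alpha=0.05, method="wilson")
print(f"95% CI for the true proportion: [{low:.3f}, {high:.3f}]")
```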


2021 ◽  
Vol 11 (23) ◽  
pp. 11262
Author(s):  
Chun-Min Yu ◽  
Chih-Feng Wu ◽  
Kuen-Suan Chen ◽  
Chang-Hsien Hsu

Many studies have pointed out that the-smaller-the-better quality characteristics (QC) can be found in many important components of machine tools, such as the roundness, verticality, and surface roughness of axes, bearings, and gears. This paper applies a process quality index capable of measuring the level of process quality. A fuzzy quality evaluation model was then developed from the process quality index, which has a one-to-one mathematical relationship with the process yield. In addition to assessing the level of process quality, the model can also be employed as a basis for deciding whether to improve the process. The model copes with the small sample sizes that arise from enterprises' need for quick response, so that evaluation accuracy is maintained even with few observations. Moreover, this fuzzy quality evaluation model is built on the confidence interval, reducing the probability of misjudgment caused by sampling errors.
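The paper's specific index is not reproduced here; as a hedged illustration of the ingredients it combines, the following computes the classical smaller-the-better capability index C_pu = (USL − mean)/(3·std) from a small sample, together with a rough normal-approximation confidence interval that reflects sampling error.

```python
import numpy as np
from scipy import stats

def cpu_with_ci(x, usl, alpha=0.05):
    """Classical smaller-the-better index (not the paper's index) with an
    approximate (1 - alpha) confidence interval; usl = upper spec limit."""
    x = np.asarray(x, dtype=float)
    n, mean, std = len(x), x.mean(), x.std(ddof=1)
    cpu = (usl - mean) / (3 * std)
    # Common normal-theory approximation for the standard error of C_pu
    se = np.sqrt(1 / (9 * n) + cpu**2 / (2 * (n - 1)))
    z = stats.norm.ppf(1 - alpha / 2)
    return cpu, (cpu - z * se, cpu + z * se)
```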


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Sheng Zeng ◽  
Guohua Geng ◽  
Hongjuan Gao ◽  
Mingquan Zhou

Abstract: Geometry images parameterise a mesh with a square domain and store the information in a single chart. A one-to-one correspondence between the 2D plane and the 3D model is convenient for processing 3D models. However, in existing geometry images the parameterised vertices are not all located at the intersections of the gridlines, so errors are unavoidable when a 3D mesh is reconstructed from the chart. In this paper, we propose parameterising a surface onto a novel geometry image that preserves the constraint of topological neighbourhood information at integer coordinate points on a 2D grid and ensures that the shape of the reconstructed 3D mesh is not changed by the supplemented image data. We find a collection of edges that opens the mesh into a simply connected surface with a single boundary. A point distribution with approximately blue-noise spectral characteristics is computed by capacity-constrained Delaunay triangulation without retriangulation. We move the vertices to the constrained mesh intersections, adjust the degenerate triangles on a regular grid, and fill the blank part by performing a local affine transformation between each triangle in the mesh and the image. Unlike other geometry images, the proposed method results in no error in the reconstructed surface model when floating-point data are stored in the image. High reconstruction accuracy is achieved when the xyz positions are stored in a 16-bit data format in each image channel, because only rounding errors exist in the topology-preserving geometry images; there are no sampling errors. The method performs a one-to-one mapping between the 3D surface mesh and the points in the 2D image, while foldovers do not appear in the 2D triangular mesh, maintaining the topological structure. This also shows the potential of using 2D image processing algorithms to process 3D models.
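One ingredient named in the abstract, the local affine transformation between a triangle in the image and a triangle in the mesh, can be sketched as follows. This is the generic construction, not the paper's implementation: the six unknowns of the 2×3 affine matrix are solved from the three vertex correspondences.

```python
import numpy as np

def triangle_affine(src, dst):
    """src, dst: (3, 2) arrays of triangle vertices.
    Returns the 2x3 affine matrix M with dst_i = M @ [src_i_x, src_i_y, 1]."""
    A = np.hstack([np.asarray(src, float), np.ones((3, 1))])  # 3x3 system matrix
    M_T = np.linalg.solve(A, np.asarray(dst, float))          # solve A @ M^T = dst
    return M_T.T                                              # 2x3 affine matrix

# Apply to a point p: triangle_affine(src, dst) @ np.array([p[0], p[1], 1.0])
```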


Land ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1194
Author(s):  
Efraín Velasco-Bautista ◽  
Martin Enrique Romero-Sanchez ◽  
David Meza-Juárez ◽  
Ramiro Pérez-Miranda

In the assessment of natural resources, such as forests or grasslands, it is common to apply a two-stage cluster sampling design, whose application in the field leads to the following situations: (a) difficulty in locating secondary sampling units (SSUs) precisely as planned, so that a random pattern of SSUs can be identified; and (b) the possibility that some primary sampling units (PSUs) have fewer SSUs than planned, leading to PSUs of different sizes. In addition, when considering the estimated variance of the various potential estimators for two-stage cluster sampling, the part corresponding to the variation between SSUs tends to be small for large populations, so the estimator's variance may depend only on the divergence between PSUs. Research on these aspects is incipient in grassland assessment, so this study generated an artificial population of 759 PSUs and examined the effect of six estimation methods, using 15 PSU sample sizes, on the unbiasedness and relative sampling errors of estimates of the aboveground, belowground, and total biomass of halophytic grassland. The results indicated that methods 1, 2, 4, and 5 achieved unbiased biomass estimates regardless of sample size, while methods 3 and 6 led to slightly biased estimates. Methods 4 and 5 had relative sampling errors of less than 5% with a sample size of 140 when estimating total biomass.
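For orientation, a textbook unbiased estimator of the population total under two-stage cluster sampling (not necessarily one of the paper's six methods) is Ŷ = (N/n) Σᵢ (Mᵢ/mᵢ) Σⱼ yᵢⱼ, where N is the number of PSUs in the population, n the number sampled, Mᵢ the number of SSUs in PSU i, and yᵢⱼ the biomass measured on sampled SSU j. A minimal sketch:

```python
def two_stage_total(N, psu_samples):
    """Unbiased estimate of the population total.
    psu_samples: list of (M_i, [y_ij, ...]) pairs, one per sampled PSU,
    where M_i is the PSU's total SSU count and the list holds sampled SSU values."""
    n = len(psu_samples)
    psu_totals = [(M_i / len(y)) * sum(y) for M_i, y in psu_samples]  # (M_i/m_i) * sum_j y_ij
    return (N / n) * sum(psu_totals)

# e.g. two_stage_total(759, [(16, [2.1, 1.8, 2.4]), (16, [1.2, 1.6, 1.1])])
# (the SSU counts and biomass values here are illustrative, not the paper's data)
```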


2021 ◽  
Vol 893 (1) ◽  
pp. 012020
Author(s):  
Nicolas A Da Silva ◽  
Benjamin G M Webber ◽  
Adrian J Matthews ◽  
Matthew M Feist ◽  
Thorwald H M Stein ◽  
...  

Abstract Extreme precipitation is ubiquitous in the Maritime Continent (MC) but poorly predicted by numerical weather prediction (NWP) models. NWP evaluation against accurate measures of heavy precipitation is essential to improve forecasting skill. Here we examine the potential utility of the Global Precipitation Measurement (GPM) Integrated Multi-satellitE Retrievals for GPM (IMERG) for NWP evaluation of extreme precipitation in the MC. For that purpose, we use radar data from Subang (Malaysia) and station data from the Global Historical Climatology Network (GHCN) in Malaysia and the Philippines. We find that earlier studies may have underestimated IMERG's performance in the MC due to large spatial sampling errors in ground precipitation measurements, especially during extreme precipitation conditions. We recommend using the 95th percentile for NWP evaluation of extreme daily and sub-daily precipitation against IMERG. At higher percentiles, the IMERG rainfall rates tend to diverge from ground observations and should therefore be treated with caution.
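A minimal sketch of the recommended comparison: evaluate a model (or IMERG itself) against reference data at the 95th percentile of precipitation rather than at the most extreme percentiles. The input arrays are placeholders assumed to hold co-located daily totals.

```python
import numpy as np

def p95_ratio(model_daily_mm, reference_daily_mm):
    """Compare extreme-precipitation intensity at the 95th percentile."""
    p95_model = np.percentile(model_daily_mm, 95)
    p95_ref = np.percentile(reference_daily_mm, 95)
    return p95_model / p95_ref  # ~1 indicates agreement at the 95th percentile
```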


Life ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 1164
Author(s):  
Bruno Mendes ◽  
Inês Domingues ◽  
Augusto Silva ◽  
João Santos

Prostate cancer (PCa) is mostly asymptomatic at an early stage and often painless, requiring active surveillance screening. Transrectal ultrasound-guided (TRUS) biopsy is the principal method to diagnose PCa, followed by a histological examination that observes cell pattern irregularities and assigns the Gleason score (GS) according to the recommended guidelines. This procedure presents sampling errors and, being invasive, may cause complications for patients. External beam radiation therapy (EBRT) is presented as a curative option for localised and locally advanced disease, as a palliative option for metastatic low-volume disease, or, after prostatectomy, for prostate bed and pelvic node salvage. In the EBRT workflow, a CT scan is performed as the basis for dose calculations and volume delineations. In this work, we evaluated the use of data-characterization algorithms (radiomics) on CT images for PCa aggressiveness assessment. The fundamental motivation relies on the wide availability of CT images and the need to provide tools to assess EBRT effectiveness. We used PyRadiomics and LIFEx to extract features and search for a radiomic signature within CT images. Finally, applying PCA to the features, we were able to show promising results.
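A hedged sketch of the described pipeline: extract radiomic features from CT image/mask pairs with PyRadiomics, then apply PCA to search for a low-dimensional signature. The file paths and the `patient_scans` list are placeholders; the LIFEx extraction and the GS labels are outside this sketch.

```python
import numpy as np
from radiomics import featureextractor
from sklearn.decomposition import PCA

extractor = featureextractor.RadiomicsFeatureExtractor()

def ct_features(image_path, mask_path):
    # Keep only the numeric "original_*" feature values from PyRadiomics
    result = extractor.execute(image_path, mask_path)
    return [float(v) for k, v in result.items() if k.startswith("original_")]

# patient_scans: assumed list of (CT image, delineation mask) paths, one per patient
patient_scans = [("ct_patient01.nii.gz", "mask_patient01.nii.gz"),
                 ("ct_patient02.nii.gz", "mask_patient02.nii.gz")]
X = np.array([ct_features(img, msk) for img, msk in patient_scans])
signature = PCA(n_components=2).fit_transform(X)  # candidate radiomic signature
```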

