Which particles to select, and if yes, how many?

Author(s):  
Christian Schwaferts ◽  
Patrick Schwaferts ◽  
Elisabeth von der Esch ◽  
Martin Elsner ◽  
Natalia P. Ivleva

Abstract: Micro- and nanoplastic contamination is becoming a growing concern for environmental protection and food safety. Therefore, analytical techniques need to produce reliable quantification to ensure proper risk assessment. Raman microspectroscopy (RM) offers identification of single particles, but to ensure that the results are reliable, a certain number of particles has to be analyzed. For larger microplastic (MP) particles, all particles on the Raman filter can be detected, errors can be quantified, and the minimal sample size can be calculated easily by random sampling. In contrast, very small particles might not all be detected, demanding a window-based analysis of the filter. A bootstrap method is presented to provide an error quantification with confidence intervals from the available window data. In this context, different window selection schemes are evaluated, and there is a clear recommendation to employ random (rather than systematically placed) window locations, with many small rather than few larger windows. Ultimately, these results are united in a proposed RM measurement algorithm that computes confidence intervals on the fly during the analysis and, by checking whether given precision requirements are already met, automatically stops once an appropriate number of particles has been identified, thus improving efficiency.
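The stopping logic sketched in the abstract is easy to prototype. The following snippet is a minimal illustration under assumed inputs, not the authors' published algorithm: the window counts, confidence level, and precision threshold are hypothetical. It resamples whole measurement windows with replacement and reports a percentile confidence interval for the plastic-particle fraction; an on-the-fly variant would simply recompute the interval after each newly measured window.

```python
import numpy as np

rng = np.random.default_rng(42)

def window_bootstrap_ci(plastic, total, n_boot=10_000, level=0.95):
    """Percentile bootstrap CI for the plastic-particle fraction,
    resampling whole windows to respect between-window variability."""
    n = len(plastic)
    idx = rng.integers(0, n, size=(n_boot, n))          # resampled window indices
    frac = plastic[idx].sum(axis=1) / total[idx].sum(axis=1)
    return np.quantile(frac, [(1 - level) / 2, (1 + level) / 2])

# Hypothetical counts per measured window: identified plastic vs. all particles.
plastic = np.array([3, 5, 2, 4, 6, 1, 3, 5])
total = np.array([40, 55, 31, 47, 60, 25, 38, 52])

lo, hi = window_bootstrap_ci(plastic, total)
print(f"95% CI for plastic fraction: [{lo:.3f}, {hi:.3f}]")
# An on-the-fly stopping rule would measure further windows until the
# interval width (hi - lo) falls below a preset precision requirement.
```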

Animals ◽  
2021 ◽  
Vol 11 (5) ◽  
pp. 1445
Author(s):  
Mauro Giammarino ◽  
Silvana Mattiello ◽  
Monica Battini ◽  
Piero Quatto ◽  
Luca Maria Battaglini ◽  
...  

This study focuses on the problem of assessing inter-observer reliability (IOR) in the case of dichotomous categorical animal-based welfare indicators assessed by two observers. Based on observations obtained from Animal Welfare Indicators (AWIN) project surveys conducted on nine dairy goat farms, and using udder asymmetry as an indicator, we compared the performance of the most popular agreement indexes available in the literature: Scott’s π, Cohen’s k, kPABAK, Holsti’s H, Krippendorff’s α, Hubert’s Γ, Janson and Vegelius’ J, Bangdiwala’s B, Andrés and Marzo’s ∆, and Gwet’s γ(AC1). Confidence intervals were calculated using closed formulas of variance estimates for π, k, kPABAK, H, α, Γ, J, ∆, and γ(AC1), while the bootstrap and exact bootstrap methods were used for all the indexes. All the indexes and closed formulas of variance estimates were calculated using Microsoft Excel. The bootstrap method was performed with R software, while the exact bootstrap method was performed with SAS software. k, π, and α exhibited a paradoxical behavior, showing unacceptably low values even in the presence of very high concordance rates. B and γ(AC1) showed values very close to the concordance rate, independently of its value. Both the bootstrap and exact bootstrap methods turned out to be simpler to implement than the closed variance formulas and provided effective confidence intervals for all the considered indexes. The best approach for measuring IOR in these cases is the use of B or γ(AC1), with the bootstrap or exact bootstrap method for confidence interval calculation.
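For the dichotomous, two-observer setting studied here, Gwet's γ(AC1) has a simple closed form, and a subject-level bootstrap gives its confidence interval. The sketch below is illustrative only (the study itself used Excel, R, and SAS, not Python); the observer scores are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def gwet_ac1(r1, r2):
    """Gwet's AC1 for two raters and a dichotomous (0/1) indicator."""
    pa = np.mean(r1 == r2)                  # observed concordance rate
    q = (np.mean(r1) + np.mean(r2)) / 2     # mean prevalence of category 1
    pe = 2 * q * (1 - q)                    # chance-agreement term for AC1
    return (pa - pe) / (1 - pe)             # 1 - pe >= 0.5, never zero

def bootstrap_ci(r1, r2, stat, n_boot=10_000, level=0.95):
    """Percentile bootstrap CI, resampling subjects with replacement."""
    n = len(r1)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        i = rng.integers(0, n, n)
        reps[b] = stat(r1[i], r2[i])
    return np.quantile(reps, [(1 - level) / 2, (1 + level) / 2])

# Hypothetical udder-asymmetry scores (1 = present) from two observers.
obs1 = np.array([0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0])
obs2 = np.array([0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0])
print("AC1:", gwet_ac1(obs1, obs2))
print("95% bootstrap CI:", bootstrap_ci(obs1, obs2, gwet_ac1))
```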


2014 ◽  
Vol 3 (4) ◽  
pp. 130
Author(s):  
Ni Made Metta Astari ◽  
Ni Luh Putu Suciptawati ◽  
I Komang Gde Sukarsa

Statistical analysis that models a linear relationship between independent variables and a dependent variable is known as regression analysis. The method commonly used to estimate the parameters in regression analysis is Ordinary Least Squares (OLS). However, OLS assumptions are often violated; in particular, the normality assumption fails in the presence of outliers, and as a result the parameter estimators produced by OLS are biased. The residual bootstrap is a bootstrap method in which the resampling process is applied to the residuals. The results showed that the residual bootstrap method is able to overcome the bias only when the proportion of outliers is 5%, using 99% confidence intervals. The resulting residual-bootstrap parameter estimators approach the initial OLS estimates, which also shows that the bootstrap is an accurate estimation tool.
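A residual bootstrap is straightforward to implement for OLS. The sketch below is a generic illustration of the technique under hypothetical data with a single outlier; it is not the study's own code.

```python
import numpy as np

rng = np.random.default_rng(1)

def residual_bootstrap(X, y, n_boot=5_000, level=0.99):
    """Residual bootstrap for OLS: fit once, then resample residuals,
    rebuild responses, and refit to get the sampling distribution of beta."""
    X1 = np.column_stack([np.ones(len(y)), X])       # add intercept column
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]     # initial OLS fit
    resid = y - X1 @ beta
    resid = resid - resid.mean()                     # center the residuals
    betas = np.empty((n_boot, X1.shape[1]))
    for b in range(n_boot):
        y_star = X1 @ beta + rng.choice(resid, size=len(y), replace=True)
        betas[b] = np.linalg.lstsq(X1, y_star, rcond=None)[0]
    alpha = (1 - level) / 2
    return beta, np.quantile(betas, [alpha, 1 - alpha], axis=0)

# Hypothetical data with one outlier in y.
X = np.arange(20, dtype=float)
y = 2.0 + 0.5 * X + rng.normal(0, 1, 20)
y[10] += 15                                          # the outlier
beta, ci = residual_bootstrap(X, y)
print("OLS estimates:", beta)
print("99% residual-bootstrap CIs:", ci.T)
```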


Psychiatry ◽  
2020 ◽  
Vol 18 (2) ◽  
pp. 6-12
Author(s):  
A. N. Simonov ◽  
T. P. Klyushnik ◽  
S. A. Zozulya

The leukocyte-inhibitory index (LII) is the ratio of the proteolytic enzyme leukocyte elastase (LE) to its inhibitor, the α1-proteinase inhibitor (α1-PI). LII characterizes the activity of the proteolytic system and can be considered a potential objective criterion that determines both the course and the outcome of the disease. Changes of LII were revealed in schizophrenia patients with clinically diagnosed asthenia (schizoasthenia) and in patients with schizophrenia without clinical signs of this syndrome. Objective: to study whether 95% confidence intervals can provide a comparative assessment of LII in patients with schizoasthenia and patients with schizophrenia without clinical signs of asthenic syndrome, in order to obtain correct statistical conclusions. Patients and methods: overall, 95 patients aged 20–55 years with paroxysmal-progressive (F20.x1) and paranoid (F20.00) schizophrenia were examined; 61 patients in the total sample were clinically diagnosed with the asthenic symptom complex. The enzymatic activity of LE and the functional activity of α1-PI were determined in blood serum. LII was calculated as the ratio of LE activity to α1-PI activity. The confidence intervals were built using four different methods: Fieller’s theorem, the delta method, the regression method, and the bootstrap method. Results: the statistical analysis indicates that the 95% confidence intervals of these indicators for the examined patient groups do not overlap. Therefore, these indicators relate to different populations, which means that the examined groups are characterized by different variants of the ratio of the proteolytic system components. Conclusion: the assessment of LII can serve as an objective, statistically correct criterion for the presence or absence of asthenic disorder in patients with schizophrenia, in addition to clinical examination.
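Of the four interval methods compared, the bootstrap is the simplest to sketch. The snippet below is a hypothetical illustration: per-patient LE and α1-PI activities are simulated, per-patient LII ratios are formed, and a percentile bootstrap interval is computed for each group's mean, mirroring the overlap check described in the results. All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def bootstrap_mean_ci(values, n_boot=10_000, level=0.95):
    """Percentile bootstrap CI for the mean of per-patient LII values."""
    n = len(values)
    reps = values[rng.integers(0, n, size=(n_boot, n))].mean(axis=1)
    return np.quantile(reps, [(1 - level) / 2, (1 + level) / 2])

# Hypothetical per-patient indices: LII = LE activity / alpha1-PI activity.
le_a, pi_a = rng.normal(240, 30, 30), rng.normal(36, 4, 30)   # with asthenia
le_b, pi_b = rng.normal(200, 30, 30), rng.normal(42, 4, 30)   # without asthenia
print("asthenia group    95% CI:", bootstrap_mean_ci(le_a / pi_a))
print("no-asthenia group 95% CI:", bootstrap_mean_ci(le_b / pi_b))
# Non-overlapping intervals would support the paper's conclusion that the
# two groups differ in the balance of the proteolytic system components.
```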


Author(s):  
Yasuhiro Saito ◽  
Tadashi Dohi

A non-homogeneous gamma process (NHGP) is characterized by an arbitrary trend function and a gamma renewal distribution. In this paper, we estimate the confidence intervals of the model parameters of an NHGP via two parametric bootstrap methods: a simulation-based approach and a resampling-based approach. For each bootstrap method, we apply three methods to construct the confidence intervals. Through simulation experiments, we investigate each parametric bootstrap and each construction method of confidence intervals in terms of estimation accuracy. Finally, we find the best combination to estimate the model parameters of the trend function and the gamma renewal distribution in the NHGP.
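A simulation-based parametric bootstrap follows a fit-simulate-refit loop. The sketch below illustrates that loop on a plain (homogeneous) gamma renewal process as a stand-in, since the paper's NHGP additionally involves a trend function; the data and model are hypothetical, and the interval construction shown is the simple percentile method, one of several the paper compares.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def fit(times):
    """ML-fit gamma shape/scale to the inter-arrival times."""
    shape, _, scale = stats.gamma.fit(np.diff(times, prepend=0.0), floc=0)
    return shape, scale

def simulate(shape, scale, n_events):
    """Draw a synthetic event sequence from the fitted renewal model."""
    return np.cumsum(rng.gamma(shape, scale, n_events))

times = np.cumsum(rng.gamma(2.0, 1.5, 50))       # hypothetical observed events
theta_hat = fit(times)                           # step 1: fit the model
reps = np.array([fit(simulate(*theta_hat, len(times)))   # steps 2-3: simulate, refit
                 for _ in range(2000)])
lo, hi = np.quantile(reps, [0.025, 0.975], axis=0)
print("shape 95% CI:", (lo[0], hi[0]))
print("scale 95% CI:", (lo[1], hi[1]))
```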


Author(s):  
Yalin Jiao ◽  
Yongmin Zhong ◽  
Shesheng Gao ◽  
Bijan Shirinzadeh

This paper presents a new random weighting method for the estimation of one-sided confidence intervals in discrete distributions. It establishes random weighting estimators for the Wald and score intervals. Based on this, a theorem on coverage probability is rigorously proved using the Edgeworth expansion for the random weighting estimator of the Wald interval. Experimental results demonstrate that the proposed random weighting method can effectively estimate one-sided confidence intervals, and its estimation accuracy is much higher than that of the bootstrap method.
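Random weighting replaces multinomial resampling with smooth Dirichlet(1, ..., 1) weights on the observations. The sketch below shows the generic construction for a one-sided upper limit on a proportion; it is not the paper's Edgeworth-corrected Wald or score estimator, and the sample is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

def random_weighting_upper(x, n_rep=10_000, level=0.95):
    """One-sided upper confidence limit for a proportion via random
    weighting: Dirichlet(1,...,1) weights replace multinomial resampling."""
    n = len(x)
    e = rng.standard_exponential((n_rep, n))
    w = e / e.sum(axis=1, keepdims=True)       # Dirichlet(1,...,1) weights
    p_rep = w @ x                              # randomly weighted proportions
    return np.quantile(p_rep, level)

x = rng.binomial(1, 0.12, 80).astype(float)    # hypothetical 0/1 sample
print("point estimate:", x.mean())
print("95% one-sided upper limit:", random_weighting_upper(x))
```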


1986 ◽  
Vol 30 ◽  
pp. 45-51 ◽  
Author(s):  
Monte C. Nichols ◽  
Dale R. Boehme ◽  
Richard W. Ryon ◽  
David Wherry ◽  
Brian Cross ◽  
...  

Abstract: X-ray Microfluorescence (XRMF) analysis uses a finely collimated beam of X-rays to excite fluorescent radiation in a sample (Nichols & Ryon 1986). Characteristic fluorescent radiation emanating from the small interaction volume element is acquired using an energy-dispersive detector placed in close proximity to the sample. The signal from the detector is processed using a computer-based multi-channel analyzer.

XRMF imaging is accomplished by translating the sample through the small X-ray beam in a step or continuous raster mode. As the sample is translated, a pixel-by-pixel X-ray intensity image is formed for each chemical element in the sample. The resulting digitized image information for each element is stored for subsequent processing and/or display. The images, in the form of elemental maps representing identical areas, may be displayed, color-coded by element and/or intensity, and then overlaid for spatial correlation.

The present study of parameters affecting the performance of an X-ray microfluorescence system has shown how such systems use X-ray beams with effective spot sizes of less than 100 micrometers to bridge the gap in analytical capabilities between predominantly surface microanalytical techniques such as SEM/EDX and bulk analytical methods such as standard XRF analysis. The combination of XRMF spectroscopy with digital imaging allows chemical information to be obtained and mapped from surface layers as well as from layers or structures beneath the sample surface. Simultaneously, it provides valuable high-resolution chemical information in a readily interpreted visual form which displays the homogeneity within a given layer or structure. XRMF systems retain the advantages of minimal sample preparation, non-destructive analysis, and high sensitivity inherent to XRF methods.
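The raster-imaging loop described above maps naturally onto a few lines of code. The sketch below is purely schematic: acquire_spectrum() is a hypothetical stand-in for the detector readout, and the element regions of interest (ROIs), grid size, and step are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

def acquire_spectrum(x_um, y_um, n_channels=2048):
    """Hypothetical stand-in for reading the energy-dispersive detector
    while the stage sits at position (x_um, y_um)."""
    return rng.poisson(1.0, n_channels)

# Hypothetical channel windows (ROIs) for two elements of interest.
ROIS = {"Fe": slice(630, 650), "Cu": slice(800, 820)}

nx, ny, step_um = 64, 64, 100               # ~100 um effective spot and step
maps = {el: np.zeros((ny, nx)) for el in ROIS}
for iy in range(ny):                        # step-raster the sample
    for ix in range(nx):
        spectrum = acquire_spectrum(ix * step_um, iy * step_um)
        for el, roi in ROIS.items():        # integrate each element's ROI
            maps[el][iy, ix] = spectrum[roi].sum()
# The per-element maps cover identical areas, so they can be color-coded
# and overlaid pixel by pixel for spatial correlation.
```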


2010 ◽  
Vol 14 (11) ◽  
pp. 2229-2242 ◽  
Author(s):  
A. Viglione

Abstract. The coefficient of L-variation (L-CV) is commonly used in statistical hydrology, in particular in regional frequency analysis, as a measure of steepness for the frequency curve of the hydrological variable of interest. As opposed to point estimation of the L-CV, in this work we are interested in the estimation of the interval of values (confidence interval) in which the L-CV is included at a given level of probability (confidence level). Several candidate distributions are compared in terms of their suitability to provide valid estimators of confidence intervals for the population L-CV. Monte Carlo simulations of synthetic samples from distributions frequently used in hydrology are used as a basis for the comparison. The best estimator proves to be provided by the log-Student t distribution, whose parameters are estimated without any assumption on the underlying parent distribution of the hydrological variable of interest. This estimator is also shown to outperform the nonparametric bias-corrected and accelerated bootstrap method. An illustrative example of how this result can be used in hydrology is presented, namely in the comparison of methods for regional flood frequency analysis. In particular, it is shown that the confidence intervals for the L-CV can be used to assess the amount of spatial heterogeneity of flood data not explained by regionalization models.
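The L-CV is the ratio of the second to the first sample L-moment, t = l2/l1. The sketch below computes it from probability-weighted moments and wraps it in a plain percentile bootstrap (deliberately simpler than the bias-corrected and accelerated variant the paper benchmarks against); the flood sample is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

def l_cv(x):
    """Sample L-CV: t = l2 / l1, from the first two sample L-moments."""
    x = np.sort(x)
    n = len(x)
    i = np.arange(1, n + 1)                 # 1-based ranks
    b0 = x.mean()                           # l1 = b0
    b1 = np.sum(x * (i - 1) / (n - 1)) / n  # first probability-weighted moment
    return (2 * b1 - b0) / b0               # l2 / l1

def percentile_ci(x, stat, n_boot=10_000, level=0.95):
    """Plain percentile bootstrap CI for the given statistic."""
    n = len(x)
    reps = [stat(x[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return np.quantile(reps, [(1 - level) / 2, (1 + level) / 2])

x = rng.gumbel(100, 35, size=40)            # hypothetical annual flood maxima
print("sample L-CV:", l_cv(x))
print("95% bootstrap CI:", percentile_ci(x, l_cv))
```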


2020 ◽  
Vol 68 (3) ◽  
pp. 949-964
Author(s):  
Dimitris Bertsimas ◽  
Bradley Sturt

The bootstrap method is one of the major developments in 20th-century statistics for computing confidence intervals directly from data. However, the bootstrap method is traditionally approximated with a randomized algorithm, which can sometimes produce inaccurate confidence intervals. In “Computation of Exact Bootstrap Confidence Intervals: Complexity and Deterministic Algorithms,” Bertsimas and Sturt present a new perspective on the bootstrap method through the lens of counting integer points in a polyhedron. Through this perspective, the authors develop the first computational complexity results and an efficient deterministic approximation algorithm (a fully polynomial-time approximation scheme) for bootstrap confidence intervals, which, unlike traditional methods, has guaranteed bounds on its error. In experiments on real and synthetic data sets from clinical trials, the proposed deterministic algorithms quickly produce reliable confidence intervals that are significantly more accurate than those obtained by randomization.
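The integer-point perspective can be made concrete for tiny samples, where the exact bootstrap distribution can be enumerated outright. The brute-force sketch below is only a didactic illustration of the object the paper studies, not the authors' polynomial-time algorithm; the sample is hypothetical.

```python
import numpy as np
from math import factorial

def exact_bootstrap_mean_ci(x, level=0.95):
    """Exact bootstrap CI for the mean by enumerating every resample count
    vector (k1, ..., kn) with k1 + ... + kn = n -- precisely the integer
    points of a scaled simplex. Brute force is feasible only for tiny n."""
    n = len(x)
    vals, probs = [], []

    def rec(i, remaining, counts):
        if i == n - 1:
            full = counts + [remaining]
            weight = factorial(n)
            for k in full:
                weight //= factorial(k)      # multinomial coefficient
            probs.append(weight / n**n)      # probability of this resample
            vals.append(np.dot(full, x) / n)
            return
        for k in range(remaining + 1):
            rec(i + 1, remaining - k, counts + [k])

    rec(0, n, [])
    order = np.argsort(vals)
    vals = np.asarray(vals)[order]
    cdf = np.cumsum(np.asarray(probs)[order])
    alpha = (1 - level) / 2
    lo = vals[np.searchsorted(cdf, alpha)]
    hi = vals[min(np.searchsorted(cdf, 1 - alpha), len(vals) - 1)]
    return lo, hi

x = np.array([2.1, 3.4, 0.7, 5.0, 1.8, 2.9])    # hypothetical sample, n = 6
print(exact_bootstrap_mean_ci(x))               # enumerates C(11, 5) = 462 points
```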


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2508
Author(s):  
Wilson Ombati Nyang’au ◽  
Andi Setiono ◽  
Angelika Schmidt ◽  
Harald Bosse ◽  
Erwin Peiner

Liquid-borne particle sampling and cantilever-based mass detection are widely applied in many industrial and scientific fields, e.g., in the detection of physical, chemical, and biological particles and in disease diagnostics. Microscopic analysis of particle-adsorbed cantilever samples can provide a good basis for measurement comparison. However, when a particle-laden droplet on a solid surface is vaporized, a cluster-ring deposit is often left behind, which makes particle counting difficult or impractical. In this study, we present an approach, on-cantilever particle imprinting, which overcomes these difficulties by sampling and depositing countable single particles on a sensing surface. Initially, we designed and fabricated a triangular microcantilever sensor whose mass m0, total beam length L, and clamped-end beam width w are equivalent to those of a rectangular cantilever, but with a higher resonant frequency (271 kHz), enhanced sensitivity (0.13 Hz/pg), and higher quality factor (~3000). To imprint particles on these cantilever sensors, various calibrated stainless-steel dispensing tips were used: each tip was dipped into and retracted from a small particle-laden droplet (resting on a hydrophobic n-type silicon substrate), then brought into contact with the sensor at a target point on the sensing area to detach the solution from the tip and adsorb the particles, ultimately allowing the particle mass concentration to be determined. Upon imprinting/adsorbing the particles on the sensor, resonant frequency response measurements were made to determine the mass (or number) of the particles. A minimum detectable mass of ~0.05 pg was demonstrated. To further validate and compare these results, cantilever samples containing adsorbed particles were imaged by scanning electron microscopy (SEM) to determine the number of particles by counting (from which the lowest count, of about 11 magnetic polystyrene particles, was obtained). Particle counting was practical essentially because of the monolayer particle arrangement on the sensing surface. Moreover, the main influences on the measurement process are also explicitly examined.
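The quoted sensitivity makes the frequency-to-mass conversion a one-liner. The sketch below is back-of-envelope arithmetic under assumed inputs: only the 0.13 Hz/pg sensitivity comes from the abstract, while the frequency shift, particle diameter, and density are hypothetical.

```python
import math

# Convert a measured resonance shift to adsorbed mass and particle count.
SENSITIVITY_HZ_PER_PG = 0.13                 # triangular cantilever, from the abstract
freq_shift_hz = 4.2                          # hypothetical downward resonance shift

adsorbed_mass_pg = freq_shift_hz / SENSITIVITY_HZ_PER_PG
print(f"adsorbed mass: {adsorbed_mass_pg:.1f} pg")

# Hypothetical monodisperse polystyrene spheres, d = 3 um, rho = 1.05 g/cm^3.
# Handy identity: a 1 um^3 volume at 1 g/cm^3 weighs exactly 1 pg.
d_um, rho_g_cm3 = 3.0, 1.05
mass_per_particle_pg = rho_g_cm3 * (math.pi / 6) * d_um**3
print(f"estimated particle count: ~{adsorbed_mass_pg / mass_per_particle_pg:.0f}")
```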


2007 ◽  
Vol 90 (5) ◽  
pp. 1354-1364 ◽  
Author(s):  
Brendon D Gill ◽  
Harvey E Indyk

Abstract: Nucleotides and nucleosides play important roles as structural units in nucleic acids, as coenzymes in biochemical pathways, and as sources of chemical energy. Milk contains a complex mixture of nucleotides, nucleosides, and nucleobases, and because of the reported differences in their relative levels in bovine and human milks, pediatric formulas are increasingly supplemented with nucleotides. Liquid chromatography is the dominant analytical technique used for the quantitation of nucleospecies and is commonly applied in ion-exchange, reversed-phase, or ion-pair reversed-phase modes. Robust methods that incorporate minimal sample preparation and rapid chromatographic separations have been developed for routine product compliance analysis. This review summarizes the analytical techniques used to date in the analysis of nucleospecies in bovine and human milks and infant formulas.

