experimental error
Recently Published Documents


TOTAL DOCUMENTS: 701 (five years: 56)

H-INDEX: 44 (five years: 3)

Coatings, 2021, Vol 11 (12), pp. 1519
Author(s): John Canning, Caspar Clark, Monica Dayao, Daniel de LaMela, Michael Logozzo, ...

The use of anti-reflection coatings on 3D-printed components to reduce both Fresnel reflections and scattering is explored. Two similar photo-initiated acrylic commercial material structures, known as Standard Clear (SC: T ≈ 60% @ λ = 800 nm) and VeroClear (VC: T ≈ 90% @ λ = 800 nm), used specifically for optical components, are examined. The refractive indices for slab samples (5 × 5 × 0.7) cm are measured at λ = 650 nm and averaged over the slab area: n(SC) ≈ 1.49 ± 0.04 and n(VC) ≈ 1.42 ± 0.03. Novel Shore D mapping is used to show the hardness distribution across the surface flats; within experimental error the hardness is uniform, with VC slightly harder than SC (VC = 85.9 ± 0.3, SC = 84.4 ± 1.3). A TiO2/MgF2 anti-reflection twin-layer coating is deposited onto one side of an unpolished SC slab and binds well, passing standard peeling and humidity tests. Shore hardness increases to SC(coated) = 87.5 ± 1.5. The coating is found to reduce the measured Fresnel reflection and surface scatter by ≈65% without requiring major polishing, paving the way for lower-cost high-quality optics. The demonstration of successful anti-reflection coatings will benefit all 3D-printed component finishes, permitting viable film deposition more broadly.
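As a back-of-the-envelope illustration of the losses at stake, the per-surface normal-incidence Fresnel reflectance implied by the measured indices follows directly from the standard formula R = ((n1 − n2)/(n1 + n2))²; this sketch uses the slab values above and is not a calculation from the paper.

```python
# Normal-incidence Fresnel reflectance at an air/material interface:
# R = ((n1 - n2) / (n1 + n2))**2. Indices are the averaged slab values
# reported above. Note the ~65% reduction quoted in the abstract also
# includes surface scatter, which this formula does not capture.
def fresnel_reflectance(n1: float, n2: float) -> float:
    return ((n1 - n2) / (n1 + n2)) ** 2

n_air = 1.0
for name, n in [("SC", 1.49), ("VC", 1.42)]:
    r = fresnel_reflectance(n_air, n)
    print(f"{name}: n = {n}, per-surface reflectance = {r:.2%}")
```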


2021, Vol 13 (1)
Author(s): Scott S. Kolmar, Christopher M. Grulke

A key challenge in the field of Quantitative Structure Activity Relationships (QSAR) is how to effectively treat experimental error in the training and evaluation of computational models. It is often assumed in the field of QSAR that models cannot produce predictions which are more accurate than their training data. Additionally, it is implicitly assumed, by necessity, that data points in test sets or validation sets do not contain error, and that each data point is a population mean. This work proposes the hypothesis that QSAR models can make predictions which are more accurate than their training data and that the error-free test set assumption leads to a significant misevaluation of model performance. This work used eight datasets covering six common QSAR endpoints, because different endpoints should have different amounts of experimental error associated with the varying complexity of the measurements. Up to 15 levels of simulated Gaussian-distributed random error were added to the datasets, and models were built on the error-laden datasets using five different algorithms. The models were trained on the error-laden data, evaluated on error-laden test sets, and evaluated on error-free test sets. The results show that for each level of added error, the RMSE for evaluation on the error-free test sets was always better. The results support the hypothesis that, at least under the conditions of Gaussian-distributed random error, QSAR models can make predictions which are more accurate than their training data, and that the evaluation of models on error-laden test and validation sets may give a flawed measure of model performance. These results have implications for how QSAR models are evaluated, especially for disciplines where experimental error is very large, such as in computational toxicology.
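The core experiment above can be sketched in a few lines: add Gaussian error to the training labels, fit a model, then score it against both error-laden and error-free test labels. This minimal sketch uses a simple linear endpoint and a least-squares fit rather than the paper's datasets and five algorithms; all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Hypothetical error-free endpoint: a linear structure-activity trend.
x = rng.uniform(0.0, 10.0, 500)
y_true = 0.8 * x + 2.0

sigma = 1.0                                     # one simulated error level
y_noisy = y_true + rng.normal(0.0, sigma, x.size)

# Train on error-laden labels, then evaluate both ways.
train, test = slice(0, 400), slice(400, 500)
coef = np.polyfit(x[train], y_noisy[train], deg=1)
pred = np.polyval(coef, x[test])

rmse_error_laden = rmse(y_noisy[test], pred)    # conventional evaluation
rmse_error_free = rmse(y_true[test], pred)      # evaluation against the truth
```

The error-free RMSE comes out well below sigma, i.e. the fitted model is more accurate than the labels it was trained on, mirroring the paper's finding.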


Author(s): Eliott Rosenberg, Paul Ginsparg, Peter L. McMahon

Quantum computers have the potential to help solve a range of physics and chemistry problems, but noise in quantum hardware currently limits our ability to obtain accurate results from the execution of quantum-simulation algorithms. Various methods have been proposed to mitigate the impact of noise on variational algorithms, including several that model the noise as damping expectation values of observables. In this work, we benchmark various methods, including a new method proposed here. We compare their performance in estimating the ground-state energies of several instances of the 1D mixed-field Ising model using the variational-quantum-eigensolver algorithm with up to 20 qubits on two of IBM's quantum computers. We find that several error-mitigation techniques allow us to recover energies to within 10% of the true values for circuits containing up to about 25 ansatz layers, where each layer consists of CNOT gates between all neighboring qubits and Y-rotations on all qubits.
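One family of methods mentioned above models the noise as uniformly damping expectation values. A hedged sketch of that idea (not the authors' specific estimator): measure the damping factor on a reference observable with a known ideal value, then divide it out of the observable of interest.

```python
# If hardware noise multiplies every expectation value by an unknown factor c,
# estimate c from a reference observable whose ideal value is known, then
# rescale the observable of interest. Purely illustrative numbers below.
def mitigate_by_damping(noisy_value: float,
                        reference_noisy: float,
                        reference_ideal: float = 1.0) -> float:
    damping = reference_noisy / reference_ideal
    return noisy_value / damping

# Toy example: ideal ground-state energy -1.2, device damps by c = 0.8.
ideal_energy, c = -1.2, 0.8
measured = ideal_energy * c        # the device would report -0.96
reference = 1.0 * c                # reference circuit (ideal value 1.0) reports 0.8
recovered = mitigate_by_damping(measured, reference)
```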


2021
Author(s): Kazutoshi Takahashi, Chikako Okubo, Michiko Nakamura, Mio Iwasaki, Yuka Kawahara, ...

Xeno-free culture systems have expanded the clinical and industrial application of human pluripotent stem cells (PSCs). However, some problems remain, such as reproducibility among experiments. Here we describe an improved method for the subculture of human PSCs. The revised method significantly enhanced the viability of human PSCs by lowering DNA damage and apoptosis, resulting in more efficient and reproducible downstream applications such as gene editing, gene delivery, and directed differentiation. Furthermore, the method did not alter PSC characteristics after long-term culture and attenuated the growth advantage of abnormal subpopulations. This robust passaging method minimizes experimental error and reduces the rate at which PSCs fail quality control in human PSC research and application.


2021, pp. 004051752110308
Author(s): Yuanying Shen, Jie Ni, Jianping Yang, Chongwen Yu

The dynamic motion of floating fibers in the drafting process, which can be characterized by fiber accelerated points, has an important effect on sliver or yarn quality. In this study, the fiber accelerated point during the roller drafting process has been tested with an improved method, in which tracer fibers and standard tracer yarns of known length were embedded into the sliver in groups. By adjusting the length and fineness of the standard tracer yarns, it was possible to determine the nip line of the front roller dynamically. The fiber accelerated points can therefore be obtained in a continuous drafting process without an external sensor, which is simpler and more convenient and avoids the experimental error caused by shutting down the drawing frame during the experiment. Based on this method, the effects of the drafting parameters and sliver properties on the fiber accelerated point in the roller drafting process have been investigated. In addition, the coefficient of variation of the sliver (CVFAP) caused by variation of the fiber accelerated point during the drafting process was also calculated. A comparison between CVFAP and the standard deviation of the fiber accelerated points shows fairly good agreement, which supports the accuracy of the experimental results for the fiber accelerated point.
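For reference, the coefficient of variation contrasted with the standard deviation above is the standard CV = 100 · σ/μ. The sketch below applies it to a hypothetical set of fiber accelerated points; the paper's CVFAP is derived from its sliver model, which is not reproduced here.

```python
import statistics

# Hypothetical fiber accelerated point positions (mm); illustrative only.
accelerated_points_mm = [18.2, 19.1, 17.8, 18.6, 19.4, 18.0]

mean = statistics.mean(accelerated_points_mm)
std = statistics.stdev(accelerated_points_mm)   # sample standard deviation
cv_percent = 100.0 * std / mean                 # coefficient of variation, %
```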


2021, Vol 11 (2), pp. 99-107
Author(s): Vladimir Ivanovskiy

The processes of chipless deformation of wood by means of a force field are less laborious and are not associated with significant energy consumption and irrecoverable waste, as in the processing of such wood by chip cutting. To determine the cutting force on a laboratory setup, a factorial experiment was carried out for two main factors: the diameter of the cutting circle and the sharpening angle of the circular blade. Then, to estimate the variance characterizing the experimental error, a separate series of 5 experiments was set in the center of the plan, i.e. under conditions where each factor varies at the basic level. A mathematical model in natural values of the factors for the force deformation of wood by a disk has been obtained. Further, the following influencing factors have been investigated: wood thickness and feed rate. The analysis of the obtained regression equation indicates that the thickness of the cut workpiece has a major influence on the cutting power, which imposes a limitation on the feed rate. The next 4-factor experiment made it possible to reveal the combined influence of the named factors, as well as the moisture content of the wood and the length of the cut, on the quality indicators of the cutting process with circular knives. The analysis of the adequacy of the obtained regression equation showed its high accuracy and made it possible to reveal the influence of dominant external factors on the quality of the separated surfaces.
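The center-of-plan replicates described above serve one purpose: their sample variance estimates the experimental error against which the regression terms are judged. A minimal sketch with hypothetical cutting-force values:

```python
import statistics

# Five hypothetical replicate runs at the center of the factorial plan
# (all factors at the basic level); values and units are illustrative.
center_point_runs = [212.0, 208.5, 215.1, 210.3, 209.8]

# Sample variance s^2 with n - 1 = 4 degrees of freedom: the estimate of
# the variance characterizing the experimental error.
error_variance = statistics.variance(center_point_runs)
```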


2021, Vol 2021 (7)
Author(s): Marvin Gerlach, Ulrich Nierste, Vladyslav Shtabovenko, Matthias Steinhauser

We consider two-loop QCD corrections to the element $\Gamma_{12}^q$ of the decay matrix in $B_q$–$\overline{B}_q$ mixing, $q = d, s$, in the leading power of the Heavy Quark Expansion. The calculated contributions involve one current-current and one penguin operator and constitute the next step towards a theory prediction for the width difference $\Delta\Gamma_s$ matching the precise experimental data. We present compact analytic results for all matching coefficients in an expansion in $m_c/m_b$ up to second order. Our new corrections are comparable in size to the current experimental error and slightly increase $\Delta\Gamma_s$.


2021
Author(s): N.N. Shuliko

The article presents the results of studies of the effect of fertilizer application on the enzyme activity of the barley rhizosphere under the conditions of the southern forest steppe of Western Siberia. The activity of the enzyme catalase decreased by up to 15% under the application of the studied factors, in comparison to the control. Under the influence of mineral fertilizers, urease activity tended to increase by up to 17%, in comparison to the control. The changes in soil invertase activity under the influence of the studied factors were within the experimental error.


2021
Author(s): Lewis Mervin, Maria-Anna Trapotsi, Avid M. Afzal, Ian Barrett, Andreas Bender, ...

<p>In the context of small molecule property prediction, experimental errors are usually a neglected aspect during model generation. The main caveat of binary classification approaches is that they weight minority cases close to the threshold boundary equivalently when distinguishing between activity classes. For example, pXC50 activity values of 5.1 and 4.9 are treated as equally important in contributing to their opposing activity classes (e.g., with a classification threshold of 5), even though experimental error may not afford such discriminatory accuracy. This is detrimental in practice, and it is therefore equally important to evaluate the presence of experimental error in databases and to apply methodologies that account for variability in experiments and uncertainty near the decision boundary.</p><p>In order to improve upon this, we herein present a novel approach toward predicting protein-ligand interactions using a Probabilistic Random Forest (PRF) classifier. The PRF comprises a modification to the long-established Random Forest (RF) to take into account uncertainties in the assigned classes (i.e., activity labels). This enables representing the activity in a framework in between the classification and regression architectures, with philosophical differences from either approach. Compared to classification, this approach enables better representation of factors increasing/decreasing inactivity. Conversely, compared to a classical regression framework, one can utilize all data (even delimited/operand/censored data far from a cut-off) while still taking into account the granularity around the cut-off. The algorithm was applied toward ~550 target prediction tasks from ChEMBL and PubChem. The largest benefit of incorporating the experimental deviation in PRF was observed for data points close to the binary threshold boundary, where such information is not considered in any way in the original RF algorithm. In comparison, the baseline RF outperformed PRF for cases with high confidence of belonging to the active class (far from the binary decision threshold). The RF models gave errors smaller than the experimental uncertainty, which could indicate that they are <i>overtrained</i> and/or <i>over-confident</i>. Overall, we show that PRF can be useful for target prediction models, in particular for data where class boundaries overlap with the measurement uncertainty and where a substantial part of the training data is located close to the classification threshold. With this approach we present, to our knowledge, the first application of probabilistic modelling of activity data for target prediction using the PRF algorithm.</p>
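The threshold problem described above (5.1 vs 4.9 treated as equally decisive) can be illustrated by turning each measurement into a probabilistic label, which is the kind of input a PRF consumes. In this sketch the Gaussian error sigma is an assumed value, not one from the paper.

```python
from statistics import NormalDist

def soft_active_label(pxc50: float, threshold: float = 5.0,
                      sigma: float = 0.3) -> float:
    """P(true activity > threshold) for a measurement with Gaussian error."""
    return 1.0 - NormalDist(mu=pxc50, sigma=sigma).cdf(threshold)

p_above = soft_active_label(5.1)   # just above the cut-off: close to 0.5
p_below = soft_active_label(4.9)   # just below the cut-off: close to 0.5
p_clear = soft_active_label(7.0)   # far above the cut-off: close to 1.0
```

Points near the boundary receive labels near 0.5 instead of hard 0/1 classes, so a probabilistic forest can weight them according to how much the experimental error actually discriminates.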

