Source of Error: Recently Published Documents

Total documents: 972 (last five years: 66)
H-index: 43 (last five years: 2)

Author(s): Thomas Peters, Robert Creutznacher, Thorben Maass, Alvaro Mallagaray, Patrick Ogrissek, ...

Infection with human noroviruses requires attachment to histo-blood group antigens (HBGAs) via the major capsid protein VP1 as a primary step. Several crystal structures of VP1 protruding domain dimers, so-called P-dimers, complexed with different HBGAs have been solved to atomic resolution. Corresponding binding affinities have been determined for HBGAs and other glycans exploiting different biophysical techniques, with mass spectrometry (MS) and nuclear magnetic resonance (NMR) spectroscopy being the most widely used. However, the reported binding affinities are inconsistent. At the extreme, MS detects binding for the same system where NMR spectroscopy does not, suggesting a fundamental source of error. In this short essay, we will explain the reason for the observed differences and compile reliable and reproducible binding affinities. We will then highlight how a combination of MS techniques and NMR experiments affords unique insights into the process of HBGA binding by norovirus capsid proteins.


2021
Author(s): Jean-Claude Roger, Eric Vermote, Sergii Skakun, Emilie Murphy, Oleg Dubovik, ...

Abstract. Aerosols play a critical role in radiative transfer within the atmosphere, and they have a significant impact on climate change. As part of the validation of atmospheric correction of remote sensing data, it is critical to utilize appropriate aerosol models, as aerosols are a main source of error. In this paper, we propose and demonstrate a framework for building and identifying an aerosol model. For this purpose, we define the aerosol model by recalculating the aerosol microphysical properties (Cvf, Cvc, %Cvf, %Cvc, rvf, rvc, σr, σc, nr440, nr650, nr850, nr1020, ni440, ni650, ni850, ni1020, %Sph) based on the optical thickness at 440 nm (τ440) and the Ångström coefficient (α440–870) obtained from numerous AERosol RObotic NETwork (AERONET) sites. Using the aerosol microphysical properties provided by the AERONET dataset, we were able to evaluate our own retrieved microphysical properties. The associated uncertainties are up to 23 %, except for the challenging imaginary part of the refractive index ni (about 38 %). Uncertainties of the retrieved aerosol microphysical properties were incorporated into the framework for validating surface reflectance derived from space-borne Earth observation sensors. Results indicate that the impact of aerosol microphysical properties varies from 3.5 × 10⁻⁵ to 10⁻³ in reflectance units. Finally, the uncertainties of the microphysical properties yielded an overall uncertainty of approximately 1 to 3 % in the retrieved surface reflectance in the MODIS red spectral band (620–670 nm), which corresponds to the specification used for atmospheric correction.
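As a point of reference for the quantities the framework builds on, the short sketch below computes the Ångström coefficient from aerosol optical thickness at two wavelengths (440 and 870 nm, matching the α440–870 used here); the function name and sample optical thickness values are illustrative, not taken from the paper.

```python
import numpy as np

def angstrom_exponent(tau_1, tau_2, lambda_1=440.0, lambda_2=870.0):
    """Angstrom coefficient from optical thicknesses at two wavelengths (nm)."""
    return -np.log(tau_1 / tau_2) / np.log(lambda_1 / lambda_2)

# Illustrative values only: optical thickness at 440 nm and 870 nm
tau_440, tau_870 = 0.25, 0.10
alpha = angstrom_exponent(tau_440, tau_870)
print(f"alpha_440-870 = {alpha:.2f}")  # ~1.34 for these sample values
```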


2021, Vol. 11 (22), pp. 10867
Author(s): Larissa Fradkin, Sevda Uskuplu Altinbasak, Michel Darmon

Crack characterisation is one of the central tasks of NDT&E (Non-Destructive Testing and Evaluation) of industrial components and structures. These days, the data necessary for carrying out this task are often collected using ultrasonic phased arrays. Many ultrasonic phased array inspections are automated, but interpretation of the data they produce is not. This paper offers an approach to designing an explainable AI (Augmented Intelligence) to meet this challenge. It describes AutoNDE, a C code comprising a signal-processing module based on a modified total focusing method that creates a sequence of two-dimensional images of an evaluated specimen; an image-processing module, which filters and enhances these images; and an explainable AI module, a decision tree, which selects images of possible cracks, groups those that appear to represent the same crack, and produces for each group a possible inspection report for perusal by a human inspector. AutoNDE has been trained on 16 datasets collected in a laboratory by imaging steel specimens with large, smooth, planar notches, both embedded and surface-breaking. It has been tested on two other similar datasets. The paper presents the results of this training and testing and describes in detail an approach to dealing with the main source of error in ultrasonic data: undulations in the specimens' surfaces.
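For context, the total focusing method that AutoNDE's signal-processing module modifies is, at its core, a delay-and-sum over every transmit-receive element pair of the phased array. The sketch below is a minimal generic implementation of that basic idea, not AutoNDE's modified algorithm; the array layout, wave speed, and sampling rate are placeholders supplied by the caller.

```python
import numpy as np

def tfm_image(fmc, elem_x, xs, zs, c, fs):
    """Basic total focusing method (delay-and-sum) on full matrix capture data.

    fmc    : array (n_elem, n_elem, n_samples) of time traces, transmit x receive
    elem_x : x-positions of the array elements (m), assumed to lie at z = 0
    xs, zs : image grid coordinates (m)
    c      : wave speed in the specimen (m/s)
    fs     : sampling frequency (Hz)
    """
    n_elem, _, n_samples = fmc.shape
    image = np.zeros((len(zs), len(xs)))
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            # one-way travel time from each element to the pixel
            t = np.hypot(elem_x - x, z) / c
            acc = 0.0
            for tx in range(n_elem):
                for rx in range(n_elem):
                    idx = int(round((t[tx] + t[rx]) * fs))
                    if idx < n_samples:
                        acc += fmc[tx, rx, idx]
            image[iz, ix] = abs(acc)
    return image
```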


Author(s): Dr. Carolina Diamandis, Yousef Abbas, Jonathan Feldman

Testing urine for microscopic blood traces, protein, and other parameters by means of test strips has been a standard method in general medical and urological practice for years, and is also common in private settings. We report a potential source of error in the measurement of the protein content of urine in men that has never been addressed in the manufacturers' instructions or in other publications.


2021, Vol. 7 (1)
Author(s): Amir H. Karamlou, William A. Simon, Amara Katabarwa, Travis L. Scholten, Borja Peropadre, ...

Abstract. In the near term, hybrid quantum-classical algorithms hold great potential for outperforming classical approaches. Understanding how these two computing paradigms work in tandem is critical for identifying areas where such hybrid algorithms could provide a quantum advantage. In this work, we study a QAOA-based quantum optimization approach by implementing the Variational Quantum Factoring (VQF) algorithm. We carry out experimental demonstrations on a superconducting quantum processor and investigate the trade-off between quantum resources (number of qubits and circuit depth) and the probability that a given biprime is successfully factored. In our experiments, the integers 1099551473989, 3127, and 6557 are factored with 3, 4, and 5 qubits, respectively, using a QAOA ansatz with up to 8 layers, and we are able to identify the optimal number of circuit layers for a given instance that maximizes the success probability. Furthermore, we demonstrate the impact of different noise sources on the performance of QAOA, and reveal the coherent error caused by residual ZZ-coupling between qubits as a dominant source of error in a near-term superconducting quantum processor.
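As a purely classical illustration of the underlying search problem (not the actual VQF encoding, which maps the factoring constraints to an Ising Hamiltonian optimized by QAOA), the toy sketch below scores every assignment of candidate factor bits by how far its product lies from the target biprime; the function name and bit widths are ours.

```python
from itertools import product

def factoring_cost(n: int, p_bits: int, q_bits: int):
    """Brute-force the cost |p*q - n| over all factor bit assignments.

    A classical toy stand-in for the landscape a hybrid optimizer such as
    QAOA explores; the real VQF encoding uses per-bit carry clauses instead.
    """
    best = None
    for bits_p in product([0, 1], repeat=p_bits):
        for bits_q in product([0, 1], repeat=q_bits):
            p = sum(b << i for i, b in enumerate(bits_p))
            q = sum(b << i for i, b in enumerate(bits_q))
            cost = abs(p * q - n)
            if best is None or cost < best[0]:
                best = (cost, p, q)
    return best

print(factoring_cost(3127, 6, 6))  # (0, 53, 59) for this small biprime
```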


Author(s): Abigail Niesen, Maury Hull

Abstract. In radiostereometric analysis (RSA), continuous migration, denoted ΔMTPM, is the difference between the maximum total point motion (MTPM) at 2 years relative to time zero and the MTPM at 1 year relative to time zero. Continuous migration has been used to diagnose tibial baseplates as stable versus unstable when compared to a specified stability limit (i.e., a threshold value of ΔMTPM). If the same point exhibits the MTPM at 2 years and at 1 year (usually the case for marker-based RSA), then an implicit assumption is that the migration path between 2 years and 1 year is the same as the path between 1 year and time zero. This paper uses vector analysis to demonstrate a source of error in ΔMTPM not previously recognized and estimates the error magnitude based on the interplay of the independent variables which affect it. The two independent variables which affect the error are the angle between the two migration vectors (i.e., the MTPM between time zero and 2 years and the MTPM between time zero and 1 year) and the difference in magnitude of the two vectors. The relative error increased in magnitude as the angle between the vectors increased, and decreased for larger differences in the magnitudes of the two vectors. For magnitude ratios ranging from 1.25 to 2, relative errors ranged from -21% to -3% at 10° and from -78% to -42% at 60°, respectively. Knowledge of these errors highlights a limitation in the use of ΔMTPM not previously recognized.
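The geometry behind this error can be reproduced with a few lines of vector arithmetic. The sketch below (variable names are ours) compares the scalar difference of the two MTPM magnitudes with the length of the true migration path between the two vectors, and recovers the relative errors quoted above.

```python
import numpy as np

def delta_mtpm_relative_error(ratio, angle_deg):
    """Relative error of the scalar difference |v2| - |v1| versus the true
    migration path length |v2 - v1|, for |v2|/|v1| = ratio and the given
    angle between the two migration vectors."""
    theta = np.radians(angle_deg)
    true_path = np.sqrt(ratio**2 + 1 - 2 * ratio * np.cos(theta))
    scalar_diff = ratio - 1
    return (scalar_diff - true_path) / true_path

for ratio in (1.25, 2.0):
    for angle in (10, 60):
        err = delta_mtpm_relative_error(ratio, angle)
        print(f"ratio {ratio}, angle {angle} deg: {err:+.0%}")
# ratio 1.25 -> about -21% at 10 deg and -78% at 60 deg
# ratio 2.00 -> about  -3% at 10 deg and -42% at 60 deg
```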


2021, Vol. 7 (2), pp. 136-139
Author(s): Tianbao Zheng, Luca Azzolin, Jorge Sánchez, Olaf Dössel, Axel Loewe

Abstract. Modeling the 'digital twin' of a patient's heart has gained traction in recent years and helps in understanding the pathogenic mechanisms of cardiovascular disease, paving the way for personalized therapies. Although a 3D patient-specific model (PSM) can be obtained from computed tomography (CT) or magnetic resonance imaging (MRI), the fiber orientation of the cardiac muscle, which significantly affects the electrophysiological and mechanical characteristics of the heart, can hardly be obtained in vivo. Several approaches have been suggested to solve this problem. However, most of them require a considerable amount of human interaction, which is both time-consuming and a potential source of error. In this work, a highly automated pipeline based on a Laplace-Dirichlet-rule-based method (LDRBM) for annotating fibers and anatomical regions in both atria is introduced. The calculated fiber arrangement was regionally compared with anatomical observations from the literature and faithfully reproduced clinical and experimental data.
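To give a flavour of the rule-based idea (a toy sketch under simplified assumptions, not the authors' atrial pipeline): a Laplace problem is solved between labelled boundary surfaces, and the gradient of the solution supplies a transmural direction from which fiber angles can then be assigned by rule. The 2D finite-difference example below uses made-up boundary labels.

```python
import numpy as np

# Toy 2D domain: "endocardial" boundary (phi = 0) at the left edge,
# "epicardial" boundary (phi = 1) at the right edge.
nx, nz = 40, 40
phi = np.zeros((nz, nx))
phi[:, -1] = 1.0

# Jacobi iteration for Laplace's equation with Dirichlet left/right
# boundaries and insulated (mirrored) top/bottom edges.
for _ in range(5000):
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1]
                              + phi[1:-1, :-2] + phi[1:-1, 2:])
    phi[0, :] = phi[1, :]
    phi[-1, :] = phi[-2, :]

# The normalized gradient of phi gives a transmural direction; a rule then
# rotates the fiber vector about it as a function of phi.
gz, gx = np.gradient(phi)
norm = np.hypot(gx, gz) + 1e-12
transmural = np.stack((gx / norm, gz / norm), axis=-1)
print(transmural[nz // 2, nx // 2])  # approx. (1, 0): points endo -> epi
```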


2021, Vol. 9 (3)
Author(s): Zeyu Yu, Akshay Jakkidi Reddy, Himanshu Wagh

The objective of this review is to determine the difference in caffeine content between coffee beans from different brands available at Costco. Two popular coffee bean brands were bought and tested to determine which would have the higher caffeine content, taking into account their relative popularity among consumers. Caffeine was extracted using the DMC method, with chemicals such as calcium carbonate, water, and DMC. The same amount of coffee beans was boiled with water until highly concentrated solutions were formed. An extraction funnel was used to wash out the caffeine. Recrystallization and vacuum filtration were then used to obtain the caffeine in solid form. The identity and purity of the product were determined using melting temperature, IR spectroscopy, UV-vis spectroscopy, and TLC plating. The mass of caffeine produced from each coffee brand was measured and compared. It was hypothesized that robusta coffee beans would yield more caffeine than arabica coffee beans. The expected results support this claim: the data suggest that the amount of caffeine extracted from 10 grams of robusta coffee would be around 0.8021 grams, while the amount extracted from 10 grams of arabica coffee would be around 0.4321 grams. The IR spectrum, UV-vis spectrum, and TLC plate were used to verify the identity of the product. The predicted IR spectrum, UV-vis spectrum, and TLC plate closely matched literature values, indicating that the product produced is pure caffeine. One source of error that could skew the data is the presence of impurities from the coffee beans that react in solution while the caffeine is being extracted. The broader impact of this review is that, by understanding the caffeine content of different products, the medical and scientific fields can further determine the difference in health effects of excess versus optimal caffeine consumption. Additionally, scientists can research various medical uses of caffeine to help patients with sleep disorders.


2021
Author(s): Aleksandr Zakharchenko, Besma Khaireddine, Ali Mili

Software product faults are an inevitable and undesirable byproduct of any software development. Often hard to detect, they are a major contributing factor to the overall development and support costs and a source of technical risk for the application as a whole. The criticality of their impact has resulted in several decades of non-stop iterative improvements aimed at avoiding and detecting faults through the development and application of sophisticated automated testing and validation systems. Nevertheless, finding the exact source of error, creating a patch to fix it, and validating it for a production release is still a highly manual activity. In this paper we build upon the theoretical framework of relative correctness, which we laid out in our previous work, and present a massively parallel automated tool implementing it in order to support root cause analysis and patch generation.


Author(s): F. Riva, U. Buck, K. Buße, R. Hermsen, E. J. A. T. Mattijssen, ...

Abstract. This study explores the magnitude of two sources of error that are introduced when extracorporeal bullet trajectories are based on post-mortem computed tomography (PMCT) and/or surface scanning of a body. The first source of error is caused by an altered gravitational pull on soft tissue, introduced when a body is scanned in a different position than it had when hit. The second source of error is introduced when scanned images are translated into a virtual representation of the victim's body. To study the combined magnitude of these errors, virtual shooting trajectories with known vertical angles through five "victims" (live test persons) were simulated. The positions of the simulated wounds on the bodies were marked with the victims in upright positions. Next, the victims were scanned in supine position using 3D surface scanning, similar to a body's position during a PMCT scan. Seven experts, used to working with 3D data, were asked to determine the bullet trajectories based on the virtual representations of the bodies. The errors between the known and determined trajectories were analysed and discussed. The results of this study give an indication of the magnitude of the introduced errors and can be used when reconstructing actual shooting incidents from PMCT data.
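For readers who want to quantify such deviations themselves, the small helper below (our own illustration, not part of the study) computes the angular error between a known and a reconstructed trajectory from their 3D direction vectors.

```python
import numpy as np

def trajectory_angle_error(v_known, v_reconstructed):
    """Angle (degrees) between a known and a reconstructed trajectory vector."""
    a = np.asarray(v_known, dtype=float)
    b = np.asarray(v_reconstructed, dtype=float)
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Illustrative: a trajectory 10 degrees below horizontal vs. one at 14 degrees
known = [np.cos(np.radians(10)), 0.0, -np.sin(np.radians(10))]
recon = [np.cos(np.radians(14)), 0.0, -np.sin(np.radians(14))]
print(f"angular error: {trajectory_angle_error(known, recon):.1f} deg")  # 4.0 deg
```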

