calibration error
Recently Published Documents

TOTAL DOCUMENTS: 256 (FIVE YEARS: 78)
H-INDEX: 21 (FIVE YEARS: 4)

Author(s):  
Linsheng Wang ◽  
Donghe Xi

Most vehicle cruise-braking calibration algorithms calibrate only the braking distance, ignoring the fact that a fatigued driver cannot control vehicle braking in time. Therefore, an embedded CNC system is added to the vehicle cruise braking distance calibration algorithm to control the vehicle speed and prevent rear-end collisions. The CNC system uses incremental control to govern cruise braking. A reaction time model and a braking distance calculation model under the control increment are established, and the air resistance and rolling resistance parameters of the cruise braking distance are calculated. Cruise braking distance calibration is completed by integrating the two models with the CNC control increment and the air resistance and rolling resistance parameters. Experimental analysis shows that the calibration error of the algorithm is within ±30 cm and the calibration accuracy is high, which meets the practical application standard of cruise braking.
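Since the abstract rests on a braking-distance model with air resistance and rolling resistance terms plus a reaction time component, a rough illustration may help. This is a minimal sketch under generic vehicle-dynamics assumptions (the function name, coefficients, and constant braking force are all illustrative), not the paper's calibration models:

```python
# Minimal sketch of a braking-distance estimate that accounts for driver
# reaction time, aerodynamic drag, and rolling resistance. All parameter
# values and names are illustrative assumptions, not the paper's model.

RHO = 1.225   # air density, kg/m^3
G = 9.81      # gravity, m/s^2

def braking_distance(v0, t_react=1.2, m=1500.0, f_brake=9000.0,
                     cd=0.32, area=2.2, crr=0.012, dt=0.001):
    """Distance travelled from speed v0 (m/s) until standstill."""
    d = v0 * t_react                              # distance covered during reaction time
    v = v0
    while v > 0.0:
        f_drag = 0.5 * RHO * cd * area * v * v    # aerodynamic drag, N
        f_roll = crr * m * G                      # rolling resistance, N
        a = (f_brake + f_drag + f_roll) / m       # total deceleration, m/s^2
        d += v * dt
        v -= a * dt
    return d

print(f"stopping distance from 100 km/h: {braking_distance(100 / 3.6):.1f} m")
```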


Entropy ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. 1608
Author(s):  
Benjamin Kompa ◽  
Jasper Snoek ◽  
Andrew L. Beam

Uncertainty quantification for complex deep learning models is increasingly important as these techniques see growing use in high-stakes, real-world settings. Currently, the quality of a model’s uncertainty is evaluated using point-prediction metrics, such as the negative log-likelihood (NLL), expected calibration error (ECE) or the Brier score on held-out data. Marginal coverage of prediction intervals or sets, a well-known concept in the statistical literature, is an intuitive alternative to these metrics but has yet to be systematically studied for many popular uncertainty quantification techniques for deep learning models. With marginal coverage and the complementary notion of the width of a prediction interval, downstream users of deployed machine learning models can better understand uncertainty quantification both on a global dataset level and on a per-sample basis. In this study, we provide the first large-scale evaluation of the empirical frequentist coverage properties of well-known uncertainty quantification techniques on a suite of regression and classification tasks. We find that, in general, some methods do achieve desirable coverage properties on in-distribution samples, but that coverage is not maintained on out-of-distribution data. Our results demonstrate the failings of current uncertainty quantification techniques as dataset shift increases and reinforce coverage as an important metric in developing models for real-world applications.
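To make the two quantities the study evaluates concrete, here is a minimal sketch of empirical marginal coverage and average interval width on synthetic data; the 1.96-sigma Gaussian intervals and all array names are assumptions for the example, not the paper's setup:

```python
# Minimal sketch: empirical marginal coverage and mean width of
# regression prediction intervals on synthetic data.
import numpy as np

def coverage_and_width(y_true, lower, upper):
    """Fraction of targets falling inside [lower, upper], and mean width."""
    covered = (y_true >= lower) & (y_true <= upper)
    return covered.mean(), (upper - lower).mean()

rng = np.random.default_rng(0)
y = rng.normal(size=1000)
mu, sigma = np.zeros(1000), np.ones(1000)        # a model's predictive mean/std
lo, hi = mu - 1.96 * sigma, mu + 1.96 * sigma    # nominal 95% intervals
cov, width = coverage_and_width(y, lo, hi)
print(f"coverage = {cov:.3f} (target 0.95), mean width = {width:.2f}")
```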


Atmosphere ◽  
2021 ◽  
Vol 12 (12) ◽  
pp. 1601
Author(s):  
Houcai Chen ◽  
Junxiang Ge ◽  
Qingde Kong ◽  
Zhenwei Zhao ◽  
Qinglin Zhu

In this paper, we present the design and implementation tests of a water vapor radiometer (WVR) suitable for very long baseline interferometry (VLBI) observation. We describe the calibration method together with an analysis of the sources of measurement error. The experimental results show that, through absolute calibration, receiver gain error calibration, and antenna feeder system temperature noise error calibration, the long-term brightness temperature measurement accuracy of the water vapor radiometer can reach 0.2 K under arbitrary ambient conditions. Furthermore, we present a method for measuring the calibration error of the oblique-path measurement, with which the oblique-path wet delay measurement accuracy of the water vapor radiometer reaches 20 mm (within one month).
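As a generic illustration of the absolute calibration step (and only that; this is not the authors' specific WVR calibration chain), a standard two-point hot/cold-load calibration can be sketched as follows. All voltages and load temperatures are illustrative:

```python
# Minimal sketch of a standard two-point (hot/cold load) radiometer
# calibration: solve for receiver gain and offset from two reference
# targets, then convert a scene voltage to brightness temperature.

def two_point_calibration(v_hot, t_hot, v_cold, t_cold):
    """Return (gain, offset) such that T = (V - offset) / gain."""
    gain = (v_hot - v_cold) / (t_hot - t_cold)    # volts per kelvin
    offset = v_cold - gain * t_cold
    return gain, offset

gain, offset = two_point_calibration(v_hot=2.95, t_hot=313.0,
                                     v_cold=0.82, t_cold=77.0)
t_scene = (1.47 - offset) / gain                  # scene brightness temperature, K
print(f"T_B = {t_scene:.1f} K")
```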


2021 ◽  
Author(s):  
Lan Zang ◽  
Kun Zhang ◽  
Chuan Tian ◽  
Chong Shen ◽  
Bhatti Uzair Aslam ◽  
...  

Abstract In order to solve the problems of low accuracy and unstable system performance when binocular vision is used alone, this paper proposes a three-dimensional space recognition and positioning algorithm based on binocular stereo vision and deep learning. First, the binocular camera is calibrated with Zhang Zhengyou's method over several adjustments, and the best calibration error is ultimately fixed at 0.10 pixels; the SAD block-matching algorithm is selected and the search range of matching points is reduced, easing the data burden of subsequent experiments. Then the three-dimensional spatial data calculated using the binocular "parallax" principle are fed into a Faster R-CNN model for training, target features are extracted and classified, and real-time detection of the target object and its position coordinates is finally realized. Analysis of the experimental data shows that, when the best calibration error is selected and the amount of training data is sufficient, the proposed algorithm effectively improves the quality of target detection: positioning accuracy and target recognition rate increase by about 3%-5%, and a higher frame rate is achieved.
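The binocular "parallax" principle the pipeline relies on reduces, for a rectified stereo pair, to Z = f * B / d: depth from focal length, baseline, and disparity. A minimal sketch follows; the focal length, baseline, and disparities are illustrative assumptions:

```python
# Minimal sketch of disparity-to-depth conversion for a rectified
# stereo pair: Z = f * B / d.
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth in metres for each pixel disparity (rectified cameras)."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / disparity_px

# e.g. focal length 800 px, baseline 12 cm, disparities from block matching
z = depth_from_disparity([64.0, 32.0, 8.0], focal_px=800.0, baseline_m=0.12)
print(z)   # [ 1.5  3.  12. ] metres
```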


Author(s):  
Shaan Khurshid ◽  
Samuel Friedman ◽  
Christopher Reeder ◽  
Paolo Di Achille ◽  
Nathaniel Diamant ◽  
...  

Background: Artificial intelligence (AI)-enabled analysis of 12-lead electrocardiograms (ECGs) may facilitate efficient estimation of incident atrial fibrillation (AF) risk. However, it remains unclear whether AI provides meaningful and generalizable improvement in predictive accuracy beyond clinical risk factors for AF. Methods: We trained a convolutional neural network ("ECG-AI") to infer 5-year incident AF risk using 12-lead ECGs in patients receiving longitudinal primary care at Massachusetts General Hospital (MGH). We then fit three Cox proportional hazards models, composed of: a) the ECG-AI 5-year AF probability, b) the Cohorts for Heart and Aging Research in Genomic Epidemiology AF (CHARGE-AF) clinical risk score, and c) terms for both ECG-AI and CHARGE-AF ("CH-AI"), respectively. We assessed model performance by calculating discrimination (area under the receiver operating characteristic curve, AUROC) and calibration in an internal test set and two external test sets (Brigham and Women's Hospital [BWH] and UK Biobank). Models were recalibrated to estimate 2-year AF risk in the UK Biobank, given the limited available follow-up. We used saliency mapping to identify the ECG features most influential on ECG-AI risk predictions and assessed the correlation between the ECG-AI and CHARGE-AF linear predictors. Results: The training set comprised 45,770 individuals (age 55±17 years, 53% women, 2,171 AF events), and the test sets comprised 83,162 individuals (age 59±13 years, 56% women, 2,424 AF events). AUROC was comparable using CHARGE-AF (MGH 0.802, 95% CI 0.767-0.836; BWH 0.752, 95% CI 0.741-0.763; UK Biobank 0.732, 95% CI 0.704-0.759) and ECG-AI (MGH 0.823, 95% CI 0.790-0.856; BWH 0.747, 95% CI 0.736-0.759; UK Biobank 0.705, 95% CI 0.673-0.737). AUROC was highest using CH-AI (MGH 0.838, 95% CI 0.807-0.869; BWH 0.777, 95% CI 0.766-0.788; UK Biobank 0.746, 95% CI 0.716-0.776). Calibration error was low using ECG-AI (MGH 0.0212; BWH 0.0129; UK Biobank 0.0035) and CH-AI (MGH 0.012; BWH 0.0108; UK Biobank 0.0001). In saliency analyses, the ECG P-wave had the greatest influence on AI model predictions. The ECG-AI and CHARGE-AF linear predictors were correlated (Pearson r: MGH 0.61, BWH 0.66, UK Biobank 0.41). Conclusions: AI-based analysis of 12-lead ECGs has predictive utility similar to a clinical risk factor model for incident AF, and the two approaches are complementary. ECG-AI may enable efficient quantification of future AF risk.
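The model-combination step ("CH-AI") amounts to a Cox proportional hazards regression with the AI probability and the clinical score as covariates. A minimal sketch using the lifelines library on synthetic data follows; column names, distributions, and sample size are illustrative stand-ins, not the study's data:

```python
# Minimal sketch: a Cox proportional hazards model combining an AI-derived
# probability and a clinical risk score as covariates, via lifelines.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "ecg_ai_prob": rng.uniform(0, 1, n),      # AI-predicted 5-year AF probability
    "charge_af": rng.normal(12.5, 1.0, n),    # clinical score (linear predictor)
    "time": rng.exponential(5.0, n),          # follow-up time, years
    "event": rng.integers(0, 2, n),           # 1 = incident AF observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")   # "CH-AI"-style joint model
cph.print_summary()
```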


Blood ◽  
2021 ◽  
Vol 138 (Supplement 1) ◽  
pp. 275-275
Author(s):  
Niroshan Nadarajah ◽  
Erika Pelaez Coyotl ◽  
James Golden ◽  
Stephan Hutter ◽  
Tamas Madl ◽  
...  

Abstract Background: Currently, hematologic neoplasms are diagnosed using a combination of methods, which require complex equipment and highly skilled clinical laboratory scientists and technicians, all scarce resources. WGS and WTS could streamline this process and become a singular method. Interpretation of WGS and WTS data in a diagnostic setting is extremely challenging due to the breadth of the data and its high-dimensional data types. AI will be mandatory to identify clinically meaningful genetic patterns and produce unbiased diagnoses. Aim: Compute leukemia diagnoses using AI methods with WGS and WTS data only, depicting the features relevant to a decision and thus making the results comprehensible and transparent to humans. Methods: To train the model we used our cohort of 4,689 samples with both WGS (100x coverage, 2x151bp) and WTS (50 mio reads/sample, 2x101bp), along with our independent final routine diagnosis based on gold standard techniques (GST) and following WHO guidelines. Single nucleotide variants (SNV), structural variants (SV) and copy number alterations (CNA) were extracted from WGS data using a tumor w/o normal pipeline, and gene fusions (GF) and gene expression (GE) from WTS. The cohort comprised 30 different neoplasms and was severely imbalanced (n: 20 - 773). To test performance, another independent cohort that was not used during model creation (n=202, 22 entities) was selected. Results: We trained an ensemble of multi-class classifiers using SageMaker (AWS, Seattle, WA) based on the LGBM implementation of gradient boosted decision trees (Ke et al, 2017) in a one vs. rest architecture (1vRA). The model accuracy reached 85% overall on a 5-fold cross-validation (Fig 1a). Since neighboring disease types such as MGUS/MM and MDS-EB-2/AML/CMML are in some cases difficult to classify correctly using GST, we trained entity-specific classifiers operating independently. Rather than forcing a single predicted class to be overwhelmingly likely, this architecture accounts for ambiguous entities. In addition to reflecting biological similarity, the 1vRA resulted in improved probability calibration, so that cases with ambiguous leukemias are more easily identifiable by the distribution of predicted class probabilities and flagged for a human. The expected calibration error was only 3.8% for the 1vRA with entity-specific components, compared to 8.7% for a single LGBM model. Typically, AI methods are black boxes, making it hard for a medical professional to understand predictions, which results in low confidence in and acceptance of such systems. Thus, we particularly focused on the transparency of the model. We employed the SHAP library (Lundberg et al, 2017) to retrace the model's output and gain insight into the features (i.e. which SNV, SV, GE etc.) predominantly driving classification results (e.g. LPL case, Fig 1b). Fig 1c illustrates the application of SHAP at the global cohort level for two individuals wrongly diagnosed with CML compared to the correctly predicted CML cohort. Using a decision plot, we can observe which features are the most important contributors to the model's prediction. Fig 1c shows that predictions for CML are primarily driven by the BCR-ABL1 features, as expected. In our independent test cohort the following entities reached very high concordance: AML (16/21), AUL (11/12), BCP-ALL (10/10), CML (13/13), HZL (8/8), MGUS (7/7), Multiple Myeloma (9/11), PNH (10/10), T-ALL (6/7). Other clear-cut entities with correct high-level predictions include BPDCN, FL, LPL, PPBL, NK-cell, HCL-variant and HGBL. In other entities such as T-NHL the results were more heterogeneous, but this was also expressed in the probability scores given by the model: the first choice had a probability score of ~50%, exposing the correct diagnosis as the second likeliest with ~40%. The test cohort included cases with mixed diagnostic characteristics, e.g. MDS/MPN-RS-T (4/11 correct, 4 predicted as MDS, and 3 as MPN). Conclusion: We present an AI tool that interprets WGS and WTS data to predict the final diagnosis without any human input and with high concordance to today's WHO classification; given the high dimensionality of WGS and WTS data, this is an impossible feat for a human. The tool is exposed via a web application, and visualizations make the automated decisions transparent and verifiable by humans, paving the way for better adoption of WGS and WTS in a clinical routine setting. Figure 1. Disclosures Kern: MLL Munich Leukemia Laboratory: Other: Part ownership. Haferlach: MLL Munich Leukemia Laboratory: Other: Part ownership. Haferlach: MLL Munich Leukemia Laboratory: Other: Part ownership.
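The two building blocks of the abstract, a one-vs-rest ensemble of gradient-boosted trees and an expected-calibration-error check, can be sketched with LightGBM and scikit-learn on synthetic data. Feature counts, class counts, and hyperparameters below are illustrative, not the MLL cohort or the deployed model:

```python
# Minimal sketch: one-vs-rest LightGBM classifiers plus a top-label
# expected calibration error (ECE) computation on held-out data.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=50, n_informative=20,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = OneVsRestClassifier(LGBMClassifier(n_estimators=200))  # 1vRA ensemble
clf.fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)

def expected_calibration_error(proba, y_true, n_bins=10):
    """Top-label ECE: |accuracy - confidence| averaged over confidence bins."""
    conf = proba.max(axis=1)
    correct = (proba.argmax(axis=1) == y_true).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

print(f"ECE = {expected_calibration_error(proba, y_te):.3f}")
```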


Sensor Review ◽  
2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Cuicui Du ◽  
Deren Kong

Purpose Three-axis accelerometers play a vital role in monitoring vibrations in aircraft machinery, especially in variable flight-temperature environments. The sensitivity of a three-axis accelerometer under different temperature conditions needs to be calibrated before the flight test. Hence, the authors investigated efficient sensitivity calibration of three-axis accelerometers under different conditions. This paper aims to propose a novel calibration algorithm for three-axis accelerometers and similar sensors. Design/methodology/approach The authors propose a hybrid genetic algorithm–particle swarm optimisation–back-propagation neural network (GA–PSO–BPNN) algorithm. This method has high global search ability, fast convergence speed and strong non-linear fitting capability; it follows the rules of natural selection and survival of the fittest. The authors describe the experimental setup for calibrating the three-axis accelerometer using a combined-environment ("three-comprehensive") electrodynamic vibration test box, which provides the different temperatures. Furthermore, to evaluate the performance of the hybrid GA–PSO–BPNN algorithm for sensitivity calibration, the authors performed a detailed comparative experimental analysis of the BPNN, GA–BPNN, PSO–BPNN and GA–PSO–BPNN algorithms at different temperatures (−55, 0, 25 and 70 °C). Findings The prediction error of the three-axis accelerometer is smallest under the hybrid GA–PSO–BPNN algorithm (approximately ±0.1), which shows that the proposed algorithm performs well on the sensitivity calibration of the three-axis accelerometer under different temperature conditions. Originality/value The designed GA–PSO–BPNN algorithm, with its high global search ability, fast convergence speed and strong non-linear fitting capability, is proposed to decrease the sensitivity calibration error of the three-axis accelerometer, and the hybrid algorithm reaches the global optimal solution rapidly and accurately.
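To illustrate the hybrid idea, here is a minimal sketch that uses particle swarm optimisation to search for good initial weights of a small back-propagation network (a one-hidden-layer MLP evaluated in NumPy). The GA-style selection stage and the backprop fine-tuning stage are omitted for brevity, and all sizes, data, and hyperparameters are illustrative, not the paper's configuration:

```python
# Minimal sketch: global-best PSO over the flattened weight vector of a
# small MLP; backprop would then fine-tune from the best particle found.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 3))          # e.g. temperature + two axis inputs
y = np.sin(X.sum(axis=1))                 # toy calibration target

N_IN, N_HID = 3, 8
DIM = N_IN * N_HID + N_HID + N_HID + 1    # weights + biases, flattened

def mse(w):
    """Forward pass of the MLP for a flat weight vector w, returning MSE."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID]; i += N_HID
    b2 = w[i]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((pred - y) ** 2)

n_particles, iters = 30, 200
pos = rng.normal(0, 0.5, (n_particles, DIM))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(f"best MSE after PSO initialisation search: {pbest_f.min():.4f}")
```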


Information ◽  
2021 ◽  
Vol 12 (10) ◽  
pp. 422
Author(s):  
Yanming Guo ◽  
Yan Bai ◽  
Shuaihe Gao ◽  
Zhibing Pan ◽  
Zibin Han ◽  
...  

An ultrahigh-precision clock (space optical clock) will be installed onboard a low-orbit spacecraft (a usual expression for a low-orbit satellite operating at an altitude of less than 1000 km) in the future, which is expected to achieve better time-frequency performance in a microgravity environment and make ultrahigh-precision long-range time synchronization possible. The advancement of the microwave two-way time synchronization method can offer an effective solution for developing time-frequency transfer technology. In this study, we focus on a method of precise satellite-ground two-way time synchronization and present its key aspects. To reduce the relativistic effects on two-way precise time synchronization, we propose a high-precision correction method. We show the results of tests using simulated data with fully realistic effects such as atmospheric delays, orbit errors, and Earth's gravity, and demonstrate the satisfactory performance of the methods. The accuracy of the relativistic error correction method is investigated in terms of the spacecraft attitude error, phase center calibration error (the residual error after calibrating the phase center offset), and precise orbit determination (POD) error. The results show that the phase center calibration error and POD error contribute most to the residual of the relativistic correction, at approximately 0.1~0.3 ps, and that time synchronization accuracy better than 0.6 ps can be achieved with the proposed methods. In conclusion, the relativistic error correction method is effective, and the satellite-ground two-way precise time synchronization method yields more accurate results: the Beidou two-way time synchronization system achieves only sub-ns accuracy, whereas the methods in this paper improve the final accuracy to the ps level.
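The core of two-way time transfer is that each station timestamps transmission and reception of the other's signal, and the clock offset falls out as half the difference of the two pseudorange measurements, with the path delay cancelling to first order. A minimal sketch with illustrative timestamps (this is the generic relation, not the paper's full correction chain):

```python
# Minimal sketch of the basic two-way time-transfer relation:
# offset = ((t2 - t1) - (t4 - t3)) / 2, symmetric path delays cancel.

def two_way_offset(t1, t2, t3, t4):
    """
    t1: ground transmit time (ground clock)   t2: satellite receive time (sat clock)
    t3: satellite transmit time (sat clock)   t4: ground receive time (ground clock)
    Returns the satellite clock offset relative to the ground clock, in seconds.
    """
    return ((t2 - t1) - (t4 - t3)) / 2.0

# one-way light time ~2 ms, satellite clock running 350 ps ahead
offset = two_way_offset(t1=0.000000000, t2=0.002000000350,
                        t3=0.010000000350, t4=0.012000000000)
print(f"estimated offset = {offset * 1e12:.0f} ps")   # ~350 ps
```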


Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6717
Author(s):  
Yunfeng Ran ◽  
Qixin He ◽  
Qibo Feng ◽  
Jianying Cui

Line-structured light has been widely used in the field of railway measurement owing to its strong anti-interference capability, fast scanning speed and high accuracy. Traditional calibration methods for line-structured light sensors have the disadvantages of long calibration times and a complicated calibration process, which makes them unsuitable for railway field applications. In this paper, a fast calibration method based on a self-developed calibration device is proposed. Compared with traditional methods, the calibration process is simplified and the calibration time is greatly shortened. The method does not need to extract light strips; thus, the influence of ambient light on the measurement is reduced. In addition, the calibration error resulting from misalignment is corrected by the epipolar constraint, improving the calibration accuracy. Calibration experiments in laboratory and field tests were conducted to verify the effectiveness of this method, and the results showed that the proposed method achieves better calibration accuracy than a traditional calibration method based on Zhang's method.
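The epipolar constraint used to detect misalignment states that a correct correspondence satisfies x'ᵀ F x = 0, so the distance of a point to its epipolar line serves as a residual that flags calibration error. A minimal sketch follows; the fundamental matrix and point coordinates are illustrative, not the paper's calibration data:

```python
# Minimal sketch: epipolar residual (point-to-epipolar-line distance)
# for a stereo correspondence under a fundamental matrix F.
import numpy as np

def epipolar_residual(F, x_left, x_right):
    """Distance in pixels from x_right to the epipolar line F @ x_left."""
    l = F @ x_left                        # epipolar line (a, b, c) in right image
    return abs(x_right @ l) / np.hypot(l[0], l[1])

# For an ideally rectified pair, F maps a point to the same image row:
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
x_l = np.array([120.0, 85.0, 1.0])        # homogeneous pixel in left image
x_r = np.array([95.0, 86.5, 1.0])         # matched pixel in right image

print(f"epipolar residual: {epipolar_residual(F, x_l, x_r):.2f} px")  # 1.50
```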

