consistent error
Recently Published Documents


TOTAL DOCUMENTS: 27 (five years: 5)

H-INDEX: 7 (five years: 1)

2021 ◽  
Vol 29 (2) ◽  
Author(s):  
Abdul Hamid Mar Iman ◽  
Nor Hizami Hassin ◽  
Muhamad Azahar Abas ◽  
Zulhazman Hamzah

Studies applying a statistical approach to analyzing the growth factors of buds in the genus Rafflesia have been lacking. This study quantified the effects of eight selected ecological factors hypothesized to influence bud growth (diameter and circumference) of Rafflesia kerrii Meijer. Non-experimental cross-sectional data were collected between April and August 2018 by in-situ observation and measurement of the eight ecological factors on thirty-four sampled individual plants in Lojing Highlands, Kelantan, Peninsular Malaysia. Ordinary Least Squares (OLS) and Heteroscedasticity-Consistent-Error (HCE) OLS regression models were employed to establish the statistical relationship between bud growth and its influencing factors. The host plant's ecological ability, temperature level, light shading, soil acidity, and the interaction between plant survival condition and growth stage were found to be significant and influential ecological factors for bud growth of Rafflesia kerrii. The results also showed that, model-wise, the HCE OLS models outperformed the OLS models in explaining the cause-and-effect relationship under study. Due to some limitations in sampling and data collection, further studies are recommended to corroborate this study using a larger sample covering a larger geographic area, possibly across different localities.
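The HCE models in this study are heteroscedasticity-consistent ("sandwich") variants of OLS. A minimal numpy sketch with hypothetical data (one stand-in predictor instead of the study's eight factors) shows how the two sets of standard errors diverge while the coefficient estimates stay identical:

```python
import numpy as np

# Hypothetical stand-in data: 34 plants (the study's sample size), one
# ecological predictor; noise variance grows with the predictor, i.e.
# the errors are heteroscedastic.
rng = np.random.default_rng(0)
n = 34
temp = rng.uniform(18, 26, n)
diameter = 2.0 + 0.5 * temp + rng.normal(0.0, 0.1 * temp, n)

X = np.column_stack([np.ones(n), temp])
beta = np.linalg.solve(X.T @ X, X.T @ diameter)   # identical under both models
resid = diameter - X @ beta

XtX_inv = np.linalg.inv(X.T @ X)
# Classical OLS covariance: s^2 (X'X)^-1
s2 = resid @ resid / (n - 2)
se_ols = np.sqrt(np.diag(s2 * XtX_inv))
# HC0 "sandwich" covariance: (X'X)^-1 X' diag(e^2) X (X'X)^-1
meat = X.T @ (X * resid[:, None] ** 2)
se_hc = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
print(se_ols, se_hc)
```

Under heteroscedasticity the classical standard errors are biased, while the sandwich estimator remains consistent, which is the motivation for preferring the HCE models here.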


2019 ◽  
Author(s):  
Phillip Harder ◽  
John W. Pomeroy ◽  
Warren D. Helgason

Abstract. Vegetation has a tremendous influence on snow processes and snowpack dynamics, yet remote sensing techniques to resolve the spatial variability of sub-canopy snow depth are lacking. Unmanned Aerial Vehicles (UAVs) have recently seen widespread application in capturing high-resolution information on snow processes and are herein applied to the sub-canopy snow depth challenge. Previous demonstrations of snow depth mapping with UAV Structure from Motion (SfM) and airborne lidar have focussed on non-vegetated surfaces or reported large errors in the presence of vegetation. In contrast, UAV-lidar systems produce high-density point clouds and measure returns from a wide range of scan angles, and so have a greater likelihood of successfully sensing the sub-canopy snow depth. The effectiveness of UAV-lidar and UAV-SfM in mapping snow depth in both open and forested terrain was tested in a 2019 field campaign at the Canadian Rockies Hydrological Observatory, Alberta, and at Canadian Prairie sites near Saskatoon, Saskatchewan, Canada. Only UAV-lidar could successfully measure the sub-canopy snow surface with reliable sub-canopy point coverage and consistent error metrics (RMSE
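Snow depth mapping of this kind typically differences a snow-on surface model against a snow-off bare-ground model on a common grid, then reports error metrics such as RMSE against manual probe measurements. A minimal sketch with hypothetical elevations and probe values:

```python
import numpy as np

# Hypothetical 2x2 gridded surface models, metres above sea level.
snow_on = np.array([[1012.42, 1012.55], [1012.61, 1012.70]])
snow_off = np.array([[1012.10, 1012.20], [1012.25, 1012.33]])
depth_map = snow_on - snow_off  # per-cell snow depth, metres

# Error metrics against manual snow-probe measurements at check points.
estimated = np.array([0.32, 0.35, 0.36, 0.37])  # lidar-derived depths, m
probed = np.array([0.30, 0.36, 0.33, 0.40])     # probe depths, m
rmse = np.sqrt(np.mean((estimated - probed) ** 2))
bias = np.mean(estimated - probed)
print(depth_map, rmse, bias)
```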


Author(s):  
J. Wohlfeil ◽  
D. Grießbach ◽  
I. Ernst ◽  
D. Baumbach ◽  
D. Dahlke

Abstract. Geometric camera calibration is a mandatory prerequisite for many applications in computer vision and photogrammetry. Especially when an accurate camera model is required, the calibration effort can increase dramatically. For the calibration of the stereo camera used for optical navigation, a new chessboard-based approach is presented. It is derived from different parts of existing approaches which, taken separately, are not able to meet the requirements. Moreover, the approach adds one novel main feature: it is able to detect all visible chessboard fields with the help of one or more fiducial markers (AprilTags) simply stuck onto a chessboard. This allows robust detection of one or more chessboards in a scene, even from extreme perspectives. Except for the acquisition of the calibration images, the presented approach enables a fully automatic calibration. Together with the parameters of the interior and relative orientation, the full covariance matrix of all model parameters is calculated and provided, allowing consistent error propagation through the whole processing chain of the imaging system. Even though the main use case for the approach is a stereo camera system, it can be used for a multi-camera system with any number of cameras mounted on a rigid frame.
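The consistent error propagation mentioned here rests on standard linearized covariance propagation: for a derived quantity y = f(x), the covariance is approximated by J Σx Jᵀ, where J is the Jacobian of f at the estimate. A sketch with hypothetical numbers (not values from the paper):

```python
import numpy as np

# Hypothetical covariance of two calibration parameters (e.g. focal
# length and one principal-point coordinate), as provided by a
# calibration that reports the full parameter covariance matrix.
cov_x = np.array([[4.0e-4, 1.0e-5],
                  [1.0e-5, 9.0e-6]])
# Jacobian of some downstream measurement with respect to those
# parameters, evaluated at the estimate (hypothetical values).
J = np.array([[1.2, 0.3]])

# First-order propagation: cov_y = J cov_x J^T
cov_y = J @ cov_x @ J.T
sigma_y = np.sqrt(cov_y[0, 0])  # 1-sigma uncertainty of the measurement
print(sigma_y)
```

This is why providing the full covariance matrix (rather than per-parameter variances alone) matters: the off-diagonal terms enter the propagated uncertainty.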


2019 ◽  
Vol 14 (5) ◽  
Author(s):  
Andreas Enzenhöfer ◽  
Albert Peiret ◽  
Marek Teichmann ◽  
József Kövecses

Modeling multibody systems subject to unilateral contacts and friction efficiently is challenging, and dynamic formulations based on the mixed linear complementarity problem (MLCP) are commonly used for this purpose. The accuracy of the MLCP solution method can be evaluated by determining the error introduced by it. In this paper, we find that commonly used MLCP error measures suffer from unit inconsistency leading to the error lacking any physical meaning. We propose a unit-consistent error measure, which computes energy error components for each constraint dependent on the inverse effective mass and compliance. It is shown by means of a simple example that the unit consistency issue does not occur using this proposed error measure. Simulation results confirm that the error decreases with convergence toward the solution. If a pivoting algorithm does not find a solution of the MLCP due to an iteration limit, e.g., in real-time simulations, choosing the result with the least error can reduce the risk of simulation instabilities and deviation from the reference trajectory.
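The paper's exact formulation is not reproduced here, but the core idea, scaling each constraint residual by its effective mass so that every error component carries units of energy, can be sketched with hypothetical numbers:

```python
import numpy as np

# Hypothetical constraint-space residuals (m/s) left by an MLCP solver
# that hit its iteration limit, and the inverse effective mass (1/kg)
# of each constraint.
residual = np.array([0.02, -0.01, 0.005])
inv_eff_mass = np.array([0.5, 0.25, 1.0])

# Energy-like error per constraint: 0.5 * m_eff * r^2  (joules), written
# with the inverse effective mass as 0.5 * r^2 / inv_eff_mass. Unlike a
# raw residual norm, every component has the same physical unit.
energy_error = 0.5 * residual ** 2 / inv_eff_mass
total_error = energy_error.sum()
print(total_error)
```

With a unit-consistent total like this, candidate solutions from different pivoting iterations can be compared meaningfully, which is what enables picking the least-error result in real-time simulation.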


2019 ◽  
Vol 23 ◽  
pp. 233121651882220 ◽  
Author(s):  
Marina Salorio-Corbetto ◽  
Thomas Baer ◽  
Brian C. J. Moore

The objective was to determine the effects of two frequency-lowering algorithms (frequency transposition, FT, and frequency compression, FC) on audibility, speech identification, and subjective benefit for people with high-frequency hearing loss and extensive dead regions (DRs) in the cochlea. A single-blind randomized crossover design was used. FT and FC were compared with each other and with a control condition (denoted ‘Control’) without frequency lowering, using hearing aids that were otherwise identical. Data were collected after at least 6 weeks of experience with each condition. Outcome measures were audibility, scores for consonant identification, scores for word-final /s, z/ detection (the S test), sentence-in-noise intelligibility, and a questionnaire assessing self-perceived benefit (Spatial and Qualities of Hearing Scale). Ten adults with steeply sloping high-frequency hearing loss and extensive DRs were tested. FT and FC improved the audibility of some high-frequency sounds for 7 and 9 participants out of 10, respectively. At the group level, performance for FT and FC did not differ significantly from that for Control on any of the outcome measures. However, the pattern of consonant confusions varied across conditions. Bayesian analysis of the confusion matrices revealed a trend for FT to lead to more consistent error patterns than FC and Control. Thus, FT may have the potential to give greater benefit than Control or FC following extended experience or training.
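Frequency compression in its generic textbook form (not the specific hearing-aid implementation used in this study) maps input frequencies above a cut-off ("knee") toward it by a fixed compression ratio, moving high-frequency speech energy into the listener's audible range. A hypothetical sketch:

```python
import numpy as np

def compress(freq_hz, knee_hz=2000.0, ratio=2.0):
    """Generic nonlinear frequency compression: frequencies below the
    knee pass through; frequencies above it are compressed toward the
    knee by the given ratio. Knee and ratio here are hypothetical."""
    freq_hz = np.asarray(freq_hz, dtype=float)
    out = freq_hz.copy()
    above = freq_hz > knee_hz
    out[above] = knee_hz + (freq_hz[above] - knee_hz) / ratio
    return out

print(compress([1000.0, 4000.0, 8000.0]))  # → [1000. 3000. 5000.]
```

The mapping is monotonic, so the ordering of spectral components is preserved; frequency transposition, by contrast, shifts a high-frequency band downward and can overlap it with lower-frequency content.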


2018 ◽  
Vol 197 ◽  
pp. 389-399 ◽  
Author(s):  
Wenwen Xie ◽  
Zhen Lu ◽  
Zhuyin Ren ◽  
Graham M. Goldin



BMJ Open ◽  
2017 ◽  
Vol 7 (12) ◽  
pp. e015912 ◽  
Author(s):  
Nancy Hedlund ◽  
Idal Beer ◽  
Torsten Hoppe-Tichy ◽  
Patricia Trbovich

Objective: To examine published evidence on intravenous admixture preparation errors (IAPEs) in healthcare settings. Methods: Searches were conducted in three electronic databases (January 2005 to April 2017). Publications reporting rates of IAPEs and error types were reviewed and categorised into the following groups: component errors, dose/calculation errors, aseptic technique errors and composite errors. The methodological rigour of each study was assessed using the Hawker method. Results: Of the 34 articles that met inclusion criteria, 28 reported the site of IAPEs: central pharmacies (n=8), nursing wards (n=14), both settings (n=4) and other sites (n=3). Using the Hawker criteria, 14% of the articles were of good quality, 74% were of fair quality and 12% were of poor quality. Error types and reported rates varied substantially, including wrong drug (~0% to 4.7%), wrong diluent solution (0% to 49.0%), wrong label (0% to 99.0%), wrong dose (0% to 32.6%), wrong concentration (0.3% to 88.6%), wrong diluent volume (0.06% to 49.0%) and inadequate aseptic technique (0% to 92.7%). Four studies directly compared incidence by preparation site and/or method, finding error incidence to be lower for doses prepared within a central pharmacy versus the nursing ward and lower for automated preparation versus manual preparation. Although eight studies (24%) reported ≥1 error with the potential to cause patient harm, no study directly linked IAPE occurrences to specific adverse patient outcomes. Conclusions: The available data suggest a need to continue to optimise the intravenous preparation process, focus on improving preparation workflow, design and implement preventive strategies, train staff on optimal admixture protocols and implement standardisation. Future research should focus on the development of consistent error subtype definitions, standardised reporting methodology and reliable, reproducible methods to track and link risk factors with the burden of harm associated with these errors.


Author(s):  
Hiroshi Kajino

Dynamic Boltzmann machines (DyBMs) are recently developed generative models of time series. They are designed to learn a time series with efficient online learning algorithms while taking long-term dependencies into account with the help of eligibility traces, recursively updatable memory units that store descriptive statistics of all past data. Current DyBMs assume a finite-dimensional time series and cannot be applied to a functional time series, in which the dimension goes to infinity (e.g., spatiotemporal data on a continuous space). In this paper, we present the functional dynamic Boltzmann machine (F-DyBM) as a generative model of a functional time series. A technical challenge is to devise an online learning algorithm with which the F-DyBM, consisting of functions and integrals, can learn a functional time series using only finite observations of it. We meet this challenge by combining a kernel-based function approximation method with a statistical interpolation method, and derive closed-form update rules. We design numerical experiments to empirically confirm the effectiveness of our solutions. The experimental results demonstrate consistent error reductions compared with baseline methods, from which we conclude the effectiveness of the F-DyBM for functional time series prediction.
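The eligibility-trace mechanism described here can be illustrated generically (hypothetical decay rate and inputs, not the F-DyBM update rules): each trace is an exponentially decaying summary of the entire past, updated in constant time per observation, so the model never has to store the full history.

```python
import numpy as np

decay = 0.8           # hypothetical decay rate of the trace
trace = np.zeros(3)   # one trace entry per input dimension

# Online update as observations arrive: each step is O(1) in the length
# of the history, yet the trace summarizes all past data.
for x_t in [np.array([1.0, 0.0, 0.5]),
            np.array([0.2, 1.0, 0.0]),
            np.array([0.0, 0.5, 1.0])]:
    trace = decay * trace + x_t

print(trace)
```

Older observations contribute with geometrically shrinking weight (decay^k for an observation k steps back), which is how long-term dependencies enter the model at fixed memory cost.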


2017 ◽  
Vol 26 (2S) ◽  
pp. 611-630 ◽  
Author(s):  
Lauren Bislick ◽  
Malcolm McNeil ◽  
Kristie A. Spencer ◽  
Kathryn Yorkston ◽  
Diane L. Kendall

Purpose: The primary characteristics used to define acquired apraxia of speech (AOS) have evolved to better reflect a disorder of motor planning/programming. However, there is debate regarding the feature of relatively consistent error location and type. Method: Ten individuals with acquired AOS and aphasia and 11 individuals with aphasia without AOS participated in this study. In the context of a 2-group experimental design, error consistency was examined via 5 repetitions of 30 multisyllabic words. The influence of error rate, severity of impairment, and stimulus presentation condition (blocked vs. random) on error consistency was also explored, as well as between-groups differences in the types of errors produced. Results: Groups performed similarly on consistency of error location; however, adults with AOS demonstrated greater variability of error type in a blocked presentation condition only. Stimulus presentation condition, error rate, and severity of impairment did not influence error consistency in either group. Groups differed in the production of phonetic errors (e.g., sound distortions) but not phonemic errors. Conclusions: Overall, findings do not support relatively consistent errors as a differentiating characteristic of AOS.
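One simple way to quantify error-type consistency across repeated productions of a word (a hypothetical illustrative metric, not the authors' analysis) is the share of a word's errors accounted for by its most frequent error type:

```python
from collections import Counter

def consistency(error_types):
    """Proportion of errors belonging to the most frequent error type.
    1.0 = the same error type on every production; lower values mean
    more variable error types across repetitions."""
    counts = Counter(error_types)
    return counts.most_common(1)[0][1] / len(error_types)

# Hypothetical error types produced on five repetitions of one
# multisyllabic word:
reps = ["distortion", "distortion", "substitution", "distortion", "omission"]
print(consistency(reps))  # 3 of 5 errors are distortions → 0.6
```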

