error limit
Recently Published Documents

TOTAL DOCUMENTS: 31 (five years: 9)
H-INDEX: 7 (five years: 1)

2021 ◽  
Vol 108 (Supplement_6) ◽  
Author(s):  
L Le Blevec ◽  
K Daga ◽  
X Sara ◽  
A Singh ◽  
S Javed ◽  
...  

Abstract
Aim: Incomplete informed consent can lead to patient dissatisfaction and litigation [1]. Time constraints, poor legibility, and human error limit the completion of consent forms, putting surgeons and trusts at risk of litigation [2]. The aim of this project was to assess the legibility and completeness of handwritten consent forms, with the objective of improving legibility to 100% and listing 100% of the risks endorsed by the British Orthopaedic Association (BOA) [3].
Method: An initial baseline study in multiple hospitals across the UK identified 113 patients who underwent hemiarthroplasties. The consent forms were assessed for legibility and for the risks included, compared against those listed by the BOA. Pre-populated risk stickers were introduced in one district general hospital (DGH) and two further cycles were completed (62 patients identified).
Results: Overall, 35% of consent forms were illegible; in every case the illegible text was in the risks section. The mean number of missing risks was 2.34, and the most frequently missed risk was 'death' (missing on 35.5% of consent forms). In the DGH that introduced stickers, consent forms were 100% legible and 100% compliant with the standards set by the BOA when the stickers were used. However, sticker use remained low: stickers were used only 20% of the time in the second cycle, marginally up from the previous cycle (18%).
Conclusions: A high proportion of consent forms are illegible and not completed to BOA standards. Pre-populated stickers could help achieve 100% legibility and 100% risk inclusion. The stickers will be implemented in other trusts, and methods to increase compliance with sticker use will be trialled.


Langmuir ◽  
2021 ◽  
Author(s):  
Yiwei Jin ◽  
Jiankui Chen ◽  
Zhouping Yin ◽  
Yiqun Li ◽  
Mengmeng Huang

Author(s):  
Laura Michele Báez Villegas ◽  
Santiago Omar Caballero Morales

The Travelling Salesman Problem (TSP) is one of the main routing problems in the Logistics and Supply Chain Management fields. Given its computational complexity, metaheuristics are frequently needed to solve it to near-optimality. In this respect, Genetic Algorithms (GAs) are promising methods; however, their search performance depends on populations of solutions, which can increase computational processing. Thus, the management of this component is subject to adaptations that reduce its computational burden and improve overall performance. This work explores the elimination of repeated individuals within the population, which may represent a significant fraction of its size and add no valuable information to the solution-search mechanisms of the GA. This cleaning process is expected to contribute to solution diversity. Experiments performed with different TSP test instances support the finding that this cleaning process can improve the convergence of the GA to very suitable solutions (within the 10% error limit). These findings were statistically validated.
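The cleaning step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper does not specify how eliminated duplicates are replaced, so refilling with fresh random tours is an assumption made here to keep the population size constant while restoring diversity.

```python
import random

def clean_population(population, n_cities, rng=random):
    """Remove repeated individuals (identical tours) from a GA population.

    Duplicate tours add no new information to the search, so each repeat
    is replaced with a fresh random tour. (Replacement by random tours is
    an assumption; the paper does not prescribe a refill strategy.)
    """
    seen = set()
    cleaned = []
    for tour in population:
        key = tuple(tour)
        if key in seen:
            # duplicate: replace with a new random permutation of the cities
            new_tour = list(range(n_cities))
            rng.shuffle(new_tour)
            cleaned.append(new_tour)
        else:
            seen.add(key)
            cleaned.append(list(tour))
    return cleaned
```

Running this once per generation keeps the population size fixed while guaranteeing that the first occurrence of every tour survives unchanged.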


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Jakub Żmigrodzki ◽  
Szymon Cygan ◽  
Krzysztof Kałużyński

Abstract
Background: In the majority of studies on speckle tracking echocardiography (STE), the strain estimates are averaged over large areas of the left ventricle. This may impair the diagnostic capability of STE in cases of, for example, local changes in cardiac contractility. This work attempts to evaluate how far the averaging area can be reduced without sacrificing an estimation accuracy that could be important from the clinical point of view.
Methods: Synthetic radio frequency (RF) data of a spheroidal left ventricular (LV) model were generated using the FIELD II package and meshes obtained from a finite element method (FEM) simulation. The apical two-chamber (A2C) view and the mid parasternal short-axis view (pSAXM) were simulated. The sector encompassed either the entire cross-section of the LV model (full view) or part of it (partial view). The wall segments defined according to the American Heart Association (AHA17) were divided into subsegments of area decreasing down to 3 mm2. Longitudinal, circumferential, and radial strain estimates, obtained using a hierarchical block-matching method, were averaged over these subsegments. Estimation accuracy was assessed using several error measures, chiefly the predicted maximal relative error of the strain estimate with respect to the FEM-derived reference. Three limits of this predicted maximal error were studied: 16.7%, 33%, and 66%. The smallest averaging area for which the strain estimation error stayed below one of these limits was considered the smallest allowable averaging area (SAAA) of the strain estimation.
Results: In all AHA17 segments in the A2C projection, the SAAA ensuring a maximal longitudinal strain estimation error below 33% was below 3 mm2, except for segment 17, where it was above 278 mm2. The SAAA ensuring a maximal circumferential strain estimation error below 33% depended on the AHA17 segment position within the imaging sector and on the view type, and ranged from below 3 mm2 to 287 mm2. The SAAA ensuring a maximal radial strain estimation error below 33%, obtained in the pSAXM projection, was not less than 287 mm2. The SAAA values obtained using the other maximal error limits differed from those observed for the 33% limit in only a limited number of cases; in those cases, the SAAA decreased when the 66% limit was used. The use of the partial view (narrow sector) resulted in a decrease of the SAAA.
Conclusions: The SAAA varies strongly between strain components. In a large part of the LV model wall in the A2C view, the longitudinal strain could be estimated using an SAAA below 3 mm2, which is smaller than the averaging area currently used in the clinic and thus offers higher resolution. The SAAA of the circumferential strain estimation strongly depends on the position of the region of interest and on the acquisition parameters. The SAAA of the radial strain estimation takes the highest values. The use of a narrow sector could increase the diagnostic capabilities of 2D STE.
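The SAAA selection rule (smallest averaging area whose predicted maximal relative error stays below the chosen limit) can be sketched as below. The (area, error) pairs in the usage example are hypothetical values, not the study's measurements.

```python
def smallest_allowable_averaging_area(area_error_pairs, error_limit):
    """Return the smallest averaging area (mm^2) whose predicted maximal
    relative strain-estimation error stays below the given limit.

    area_error_pairs : iterable of (area_mm2, max_relative_error) tuples
    Returns None when no tested area meets the limit.
    """
    admissible = [area for area, err in area_error_pairs if err < error_limit]
    return min(admissible) if admissible else None

# Hypothetical error predictions for three averaging areas:
pairs = [(3, 0.12), (10, 0.08), (287, 0.05)]
saaa = smallest_allowable_averaging_area(pairs, 0.33)  # smallest area passing 33%
```

With these hypothetical inputs the 33% limit is already met at 3 mm2, mirroring the longitudinal-strain result reported for most A2C segments.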


2020 ◽  
Vol 58 (12) ◽  
pp. 2037-2045 ◽  
Author(s):  
Samy Mzougui ◽  
Julien Favresse ◽  
Reza Soleimani ◽  
Catherine Fillée ◽  
Damien Gruson

Abstract
Background: Biotin is currently a matter of concern for laboratories using biotin-streptavidin-based immunoassays. Biotin interference has been reported for high-sensitivity troponin T (hsTnT) and thyroid-stimulating hormone (TSH) assays. We aimed to evaluate the new generation of hsTnT and TSH electrochemiluminescent immunoassays, announced to be less sensitive to biotin.
Methods: Firstly, we assessed the analytical performance of the new generation assays (imprecision, bias, total error, limit of quantification) and compared the previous and new generation assays in the absence of biotin. Secondly, we challenged both generations of assays with samples spiked with seven different biotin levels. The efficiency of the new generation assays was also compared to streptavidin bead treatment.
Results: The new generation assays presented suitable analytical performance. The previous and new generations of hsTnT and TSH assays were commutable in the absence of biotin. In the presence of biotin, we confirmed that the previous generation assays were affected by biotin concentrations as low as 40.5 ng/mL, and that the new generation assays were not affected up to the announced tolerance threshold of 1200 ng/mL. After streptavidin bead treatment, we observed higher imprecision for both parameters and a constant 10% negative bias for TSH compared to the new generation assays.
Conclusions: The new generation of electrochemiluminescent immunoassays appears to be a reliable systematic solution to prevent biotin interference in hsTnT and TSH testing.


2020 ◽  
Vol 19 (5-6) ◽  
pp. 122-127
Author(s):  
Evgenia V. Kompantseva ◽  
Daria N. Lutsenko ◽  
Alexander A. Glushko

This paper presents the results of the selection and justification of conditions for the determination of a new biologically active compound (BAC), VMA-13-15, which is N-(2-[4-oxo-3(4H)-quinazolinyl]propionyl)guanidine, by means of spectrophotometry. Based on the calculated ionization constants of the BAC, purified water was proposed as the solvent. The absorption maximum at 266 nm was chosen as the analytical wavelength. This choice was justified by the fact that in aqueous solutions 99% of the BAC is in the molecular form, which allows it to be determined with an error limit of 1.29% and with the smallest number of dilutions. The study showed that the proposed method is specific and linear over the analytical concentration range of 0.001–0.004%, as well as precise and accurate, which confirms its suitability for quantitative determination.
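Quantitative spectrophotometric determination of this kind typically rests on a linear (Beer-Lambert) calibration at the analytical wavelength. The sketch below illustrates the general workflow with hypothetical absorbance readings spanning the reported linear range; it is not the paper's data or validated procedure.

```python
import numpy as np

# Hypothetical calibration readings at 266 nm, spanning the linear
# range reported in the paper (0.001-0.004% w/v); not the study's data.
conc = np.array([0.001, 0.002, 0.003, 0.004])        # % w/v
absorbance = np.array([0.210, 0.421, 0.628, 0.840])  # hypothetical

# Least-squares fit of the working line: absorbance = slope * conc + intercept
slope, intercept = np.polyfit(conc, absorbance, 1)

def concentration_from_absorbance(a):
    """Invert the calibration line to quantify an unknown sample."""
    return (a - intercept) / slope
```

An unknown whose absorbance falls inside the calibrated range is then quantified by reading it back through the fitted line.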


2020 ◽  
Vol 11 (2) ◽  
pp. 37
Author(s):  
Khanifudin Khanifudin ◽  
Jajang Jajang ◽  
Bambang Hendriya Guswanto

The accuracy of survey results depends on the sample size. To date, researchers determining the sample size for the stratified random sampling method have relied on manual calculations. This research therefore aims to create a sample size determination program for the stratified random sampling method using the C++ and PHP programming languages. The research began with a literature study, followed by the creation of flowcharts and pseudocode algorithms, the writing of the program syntax, and application to data on the number of civil servants in each UPK in Banyumas Regency. With a 95% confidence level, a tolerable error limit of 5, and a cost of 1 for each stratum, the minimum sample size is 60, with first- and second-stratum sample sizes of 45 and 15, respectively. The program is expected to help researchers determine sample sizes more easily. It also reduces calculation errors, since a warning appears when an error occurs.
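The kind of calculation such a program automates can be sketched with a common textbook formula for stratified random sampling under proportional allocation. This is not necessarily the exact formula implemented in the authors' C++/PHP program, and the inputs below are hypothetical, not the Banyumas Regency data.

```python
import math

def stratified_sample_size(N_h, S_h, e, z=1.96):
    """Minimum total sample size for stratified random sampling with
    proportional allocation (Cochran-style formula; an illustrative
    stand-in for the paper's program, whose exact formula is unstated).

    N_h : stratum population sizes
    S_h : stratum standard deviations
    e   : tolerable error limit (margin of error, units of the data)
    z   : normal deviate for the confidence level (1.96 ~ 95%)
    """
    N = sum(N_h)
    W = [Nh / N for Nh in N_h]                 # stratum weights
    V = (e / z) ** 2                           # target estimator variance
    num = sum(w * s * s for w, s in zip(W, S_h))
    n = math.ceil(num / (V + num / N))
    # proportional allocation across strata (ceiling may make the
    # stratum sizes sum to slightly more than n)
    n_h = [math.ceil(w * n) for w in W]
    return n, n_h
```

For example, two strata of 300 and 100 units with equal standard deviation 10 and a tolerable error of 2 at 95% confidence yield a total size of 78, allocated 59 and 20.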


The objective of linearizing a nonlinear system is to enable smooth control of the linearized system through well-proven linear control methods. However, residual nonlinearities may still be present in a system after linearization, either by design or due to mismatch between the system model and the actual plant. If the residual nonlinearities are not very significant, one can attempt to remove them by tuning the linearizing transformation, comparing the system against a linear canonical form. In this paper, we show how the quadratic linearizing transformations of a three-phase horizontal gravity separator (TPS) model, derived in an earlier paper by the authors, can be tuned as in a neural network, using error back-propagation against a canonical linear model, thus removing the nonlinearities to within the tuning error limit.
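The tuning idea above can be illustrated on a toy static system. The sketch below is only an assumption-laden stand-in: the plant is a scalar map with a residual quadratic term (nothing like the real TPS dynamics), and a single coefficient of a quadratic correction is tuned by gradient descent on the squared mismatch with the linear canonical model, which is the back-propagation step in miniature.

```python
import numpy as np

# Toy plant with a residual quadratic nonlinearity (hypothetical stand-in
# for the separator model; the true TPS dynamics are far richer).
def plant(u, a=0.15):
    return u + a * u**2

# Quadratic linearizing correction with a tunable coefficient theta:
# z = y - theta * y**2. Theta is tuned by gradient descent on the squared
# mismatch with the canonical linear model z_ref = u, i.e. by
# back-propagating the error through the correction.
def tune_theta(n_iter=2000, lr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    theta = 0.0
    for _ in range(n_iter):
        u = rng.uniform(-1.0, 1.0, size=32)    # random excitation batch
        y = plant(u)
        z = y - theta * y**2                   # corrected (linearized) output
        err = z - u                            # mismatch with linear model
        grad = np.mean(2.0 * err * (-y**2))    # d(mean err^2)/d(theta)
        theta -= lr * grad
    return theta
```

After tuning, the corrected output tracks the linear canonical model far more closely than the raw plant output, which is the sense in which the residual nonlinearity is removed to within the tuning error limit.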


Nativa ◽  
2019 ◽  
Vol 7 (3) ◽  
pp. 312
Author(s):  
Tatiana Da Cunha Castro ◽  
Ademir Roberto Ruschel ◽  
João Olegário Pereira de Carvalho ◽  
Edson Marcos Leal Soares Ramos ◽  
Jaqueline Macedo Gomes

REPRESENTATIVENESS AND PRECISION IN THE ESTIMATION OF DENSITY AND BASAL AREA IN THE TAPAJÓS NATIONAL FOREST
ABSTRACT: The objective of the study was to evaluate the efficiency of plots in estimating the density and basal area of a managed forest in the Amazon. Density and basal area were calculated using data from a forest census carried out over 144 ha and from measurements in 48 permanent plots of 0.25 ha each (a 12 ha sample), installed within the same 144 ha by simple random sampling. To test the sampling efficiency in estimating these variables, the actual relative error, the sampling error, and the sampling intensity were calculated for different minimum-diameter inclusion levels in the inventory. The actual error was calculated for six inclusion levels, considering trees with diameter from 25 cm; the sampling error was calculated for 10 inclusion levels, considering trees with diameter from 5 cm; and the sampling intensity was calculated for finite populations. The admissible sampling error limit was 10%. The sampling used in the area was sufficient to generate results with a high level of precision and can be applied in dense forests with density and basal area similar to those of the studied forest.
Keywords: forest census, sampling units, sampling error, sampling efficiency.


In many localization applications, the exact locations of randomly distributed sensor nodes must be determined with precise accuracy in a wireless sensor network. Different placement scenarios of the master nodes play a major role in effective network-wide localization over a sensing area. In this paper, the Time Difference of Arrival with Dual Velocity (TDOA-DV) localization technique is employed, along with the proposed Broadcast Sub-anchoring Packet (BSP) technique, for the effective localization of several sensor nodes over a network. This new technique investigates the effect of different placements of the three master nodes across the network. The simulation results show that a network-wide localization error limit of 0.002 m can be achieved with this placement of anchor nodes.
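For readers unfamiliar with TDOA localization, the core estimation step can be sketched as plain single-velocity TDOA multilateration with three master (anchor) nodes. The anchor coordinates and node position below are hypothetical, and the paper's dual-velocity and broadcast sub-anchoring mechanisms are not modelled; this only shows how a position is recovered from range differences by Gauss-Newton least squares.

```python
import numpy as np

# Hypothetical master-node (anchor) positions in metres.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])

def tdoa_residuals(p, anchors, range_diffs):
    """Residuals of the TDOA equations: (|p-a_i| - |p-a_0|) - measured."""
    d = np.linalg.norm(anchors - p, axis=1)
    return (d[1:] - d[0]) - range_diffs

def locate(anchors, range_diffs, p0, n_iter=50):
    """Gauss-Newton solver for the unknown node position."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        d = np.linalg.norm(anchors - p, axis=1)
        unit = (p - anchors) / d[:, None]      # gradients of |p - a_i|
        J = unit[1:] - unit[0]                 # Jacobian of the residuals
        r = (d[1:] - d[0]) - range_diffs
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + step
    return p

# Simulate noiseless range-difference measurements for a node at (3, 4)
# and recover its position from a nearby initial guess.
true_p = np.array([3.0, 4.0])
dists = np.linalg.norm(anchors - true_p, axis=1)
meas = dists[1:] - dists[0]
est = locate(anchors, meas, p0=[5.0, 5.0])
```

With noisy measurements the residual norm at the solution, rather than zero, bounds the achievable localization error, which is where anchor placement becomes decisive.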

