counting error
Recently Published Documents

TOTAL DOCUMENTS: 80 (last five years: 17)
H-INDEX: 16 (last five years: 2)

2021 ◽  
Author(s):  
Francesco Muschitiello

Abstract. This study presents the first continuously measured transfer functions that quantify the age difference between the Greenland Ice-Core Chronology 2005 (GICC05) and the Hulu Cave U-Th timescale during the last glacial period. The transfer functions were estimated using an automated algorithm for Bayesian inversion that infers a continuous and objective synchronization between Greenland ice-core and Hulu Cave proxy signals. The algorithm explicitly considers prior knowledge of the maximum counting error (MCE) of GICC05, but also samples synchronization scenarios that exceed the differential dating uncertainty of the annual-layer count in ice cores and that are currently not detectable with conventional tie-point alignments or wiggle-matching techniques. The consistency and accuracy of the results were ensured by estimating two independent synchronizations: a climate synchronization based on climate proxy records, and a climate-independent synchronization based on cosmogenic radionuclide data (i.e., 10Be and 14C). The transfer functions are up to 40 % more precise than previous estimates and significantly reduce the absolute dating uncertainty of GICC05 back to 48 kyr ago. The results highlight that the annual-layer counting error of GICC05 is not strictly correlated over extended periods of time, and that within certain Greenland Stadials the differential dating uncertainty is likely underestimated by 7.5–20 %. Importantly, the analysis implies for the first time that during the Last Glacial Maximum GICC05 overcounts ice layers by 15–25 %, a bias attributable to a higher frequency of sub-annual layers caused by changes in the seasonal cycle of precipitation and the mode of dust deposition to the Greenland Ice Sheet. The new timescale transfer functions provide important constraints on the uncertainty surrounding the stratigraphic dating of the Greenland age scale and enable an improved chronological integration of ice cores and U-Th- and radiocarbon-dated paleoclimate records on a common timeline. The transfer functions are available as supplements to this study.
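As a concrete illustration of how such a transfer function is used downstream, the sketch below maps a GICC05 age onto the Hulu U-Th timescale by interpolating between synchronization points. The node values are invented placeholders, not the published transfer function.

```python
import numpy as np

# Hypothetical synchronization points: GICC05 ages paired with the
# corresponding Hulu Cave U-Th ages (values are illustrative only).
gicc05_nodes = np.array([11_700, 20_000, 30_000, 40_000, 48_000])
hulu_nodes   = np.array([11_650, 20_150, 30_400, 40_600, 48_900])

def gicc05_to_hulu(age_gicc05):
    """Transfer a GICC05 age onto the Hulu U-Th timescale by linear
    interpolation between synchronization points."""
    return np.interp(age_gicc05, gicc05_nodes, hulu_nodes)

print(gicc05_to_hulu(35_000))  # a GICC05 age expressed on the Hulu timescale
```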


Author(s):  
Eun Ji Jeong ◽  
Donghyuk Choi ◽  
Dong Woo Lee

Conventional cell-counting software uses contour or watershed segmentation and focuses on identifying two-dimensional (2D) cells attached to the bottom of plastic plates. Recently developed software has been a useful tool for the quality control of 2D cell-based assays, measuring initial seed-cell numbers. These algorithms cannot, however, accurately quantify cells in three-dimensional (3D) cell-based assays that use extracellular matrix (ECM), because cells aggregate and overlap within the 3D structure of ECMs such as Matrigel, collagen, and alginate. Such overlapped and aggregated cells are difficult to segment and to count accurately. Determining the number of cells is nevertheless important for standardizing experiments and ensuring the reproducibility of 3D cell-based assays. In this study, we apply a 3D cell-counting method using U-Net deep learning to high-density aggregated cells in ECM to identify initial seed-cell numbers. The proposed method showed a 10% counting error on high-density aggregated cells, whereas contour and watershed segmentations showed 30% and 40% counting errors, respectively. The proposed method can thus reduce the seed cell-counting error in 3D cell-based assays by providing researchers with the exact number of cells, thereby supporting quality control in 3D cell-based assays.
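The abstract does not include the authors' pipeline; below is a minimal sketch of the generic final step of such a method, counting cells from a network's probability map by thresholding and connected-component labeling. The threshold and minimum component size are assumed values, and scipy's labeling works on 2D and 3D arrays alike.

```python
import numpy as np
from scipy import ndimage

def count_cells(probability_map, threshold=0.5, min_size=20):
    """Count cells in a (2D or 3D) segmentation probability map by
    thresholding and connected-component labeling; components smaller
    than min_size voxels are treated as noise."""
    mask = probability_map > threshold
    labeled, n = ndimage.label(mask)
    if n == 0:
        return 0
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
    return int(np.sum(sizes >= min_size))
```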


Author(s):  
Sina Buck ◽  
Collin Krauss ◽  
Delia Waldenmaier ◽  
Christina Liebing ◽  
Nina Jendrike ◽  
...  

Abstract Aim Correct estimation of meal carbohydrate content is a prerequisite for successful intensified insulin therapy in patients with diabetes. In this survey, the counting error of adult patients with type 1 diabetes was investigated. Methods Seventy-four patients with type 1 diabetes estimated the carbohydrate content of 24 standardized test meals. The test meals were assigned to 1 of 3 groups by carbohydrate content: low, medium, and high. Estimation results were compared with each meal's actual carbohydrate content, determined by calculation based on weighing. A subgroup of the participants estimated the test meals a second (n=35) and a third time (n=22), with a mean interval of 11 months between estimations. Results During the first estimation, the carbohydrate content was underestimated by −28% (−50, 0) of the actual carbohydrate content. Meals with high carbohydrate content in particular were underestimated, by −34% (−56, −13). The median counting error improved significantly when estimations were performed a second time (p<0.001). Conclusions Participants generally underestimated the carbohydrate content of the test meals, especially meals with higher carbohydrate content. Repeating the estimation significantly improved estimation accuracy and is important for maintaining correct carbohydrate estimation. The ability to estimate the carbohydrate content of a meal should be checked and trained regularly in patients with diabetes.
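For readers reproducing such surveys, the error metric implied above (negative values meaning underestimation) can be computed as follows; this assumes the conventional definition of relative counting error against the weighed reference.

```python
def counting_error_percent(estimated_g, actual_g):
    """Relative carbohydrate-counting error in percent of the actual
    content; negative values indicate underestimation."""
    return 100 * (estimated_g - actual_g) / actual_g

# e.g. a 60 g meal estimated as 45 g is underestimated by -25 %
print(counting_error_percent(45, 60))  # -25.0
```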


2021 ◽  
pp. 193229682110123
Author(s):  
Chiara Roversi ◽  
Martina Vettoretti ◽  
Simone Del Favero ◽  
Andrea Facchinetti ◽  
Pratik Choudhary ◽  
...  

Background: In the management of type 1 diabetes (T1D), systematic and random errors in carb counting can have an adverse effect on glycemic control. In this study, we performed an in silico trial aiming to quantify the impact of different levels of carb-counting error on glycemic control. Methods: The T1D patient decision simulator was used to simulate 7-day glycemic profiles of 100 adults using open-loop therapy. The simulation was repeated for different values of systematic and random carb-counting errors, generated with a Gaussian distribution by varying the error mean from -10% to +10% and the standard deviation (SD) from 0% to 50%. The effect of the error was evaluated by computing the difference in time inside (∆TIR), above (∆TAR), and below (∆TBR) the target glycemic range (70-180 mg/dl) relative to the reference case, that is, the absence of error. Finally, 3 linear regression models were developed to mathematically describe how variations in error mean and SD result in ∆TIR, ∆TAR, and ∆TBR changes. Results: Random errors globally deteriorate glycemic control; systematic underestimation leads to, on average, up to 5.2% more TAR than the reference case, while systematic overestimation results in up to 0.8% more TBR. The different time-in-range metrics were linearly related to error mean and SD (R² > 0.95), with slopes of [Formula: see text], [Formula: see text] for ∆TIR, [Formula: see text], [Formula: see text] for ∆TAR, and [Formula: see text], [Formula: see text] for ∆TBR. Conclusions: The quantification of carb-counting error impact performed in this work may be useful for understanding causes of glycemic variability and the impact of possible therapy adjustments or behavior changes on different glucose metrics.
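The input-perturbation step of the error model described above is straightforward to reproduce: the sketch below applies a Gaussian carb-counting error with the stated structure (mean as systematic bias, SD as random error, both in percent of the true content). It is not the T1D patient decision simulator itself, only an assumed form of the meal-input perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_carb_counts(true_carbs_g, mean_pct, sd_pct):
    """Apply a systematic (mean_pct) plus random (sd_pct) carb-counting
    error, both expressed as percentages of the true carb content."""
    error_pct = rng.normal(loc=mean_pct, scale=sd_pct, size=len(true_carbs_g))
    return true_carbs_g * (1 + error_pct / 100)

meals = np.array([40.0, 60.0, 80.0])        # true carb content (g)
print(perturb_carb_counts(meals, -10, 20))  # e.g. -10 % bias, 20 % SD
```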


2021 ◽  
Author(s):  
Celia Martin-Puertas ◽  
Amy A. Walsh ◽  
Simon P.E Blockley ◽  
Poppy Harding ◽  
George E. Biddulph ◽  
...  

This paper reports the first Holocene varved chronology for the lacustrine sediment record of Diss Mere in the UK. The Diss Mere record is 15 m long and shows 4.2 m of finely laminated sediments between ca. 9 and 13 m core depth. Microfacies analysis identified three major seasonal patterns of deposition, which corroborate the annual nature of sedimentation throughout the whole interval. The sediments are diatomaceous organic and carbonate varves with an average thickness of 0.45 mm. A total of 8473 varves were counted, with a maximum counting error of up to 40 varves at the bottom of the varved sequence. To tie the resulting floating varve chronology to the IntCal20 radiocarbon timescale, we used a Bayesian deposition model (P_Sequence with outlier detection) on all available chronological data from the core. The data included five radiocarbon dates, two known tephra layers (Glen Garry and OMH-185) with calendar ages based on Bayesian modelling of sequences of radiocarbon ages, and the relative varve counts between dated points. The resulting age-depth model (DISSV-2020) dates the varved sequence between ca. 2100 and 10,300 cal BP, with age uncertainties that are decadal in scale (95% confidence).
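The deterministic core of tying a floating varve count to an independently dated horizon can be sketched as below. The Bayesian P_Sequence model used in the study additionally propagates radiocarbon and counting uncertainties, which this placeholder omits; the tie-point values are illustrative, not the Diss Mere data.

```python
import numpy as np

def anchor_varve_chronology(varve_numbers, tie_varve, tie_age_cal_bp):
    """Convert floating varve numbers (counted downward from the top of
    the laminated interval, so larger numbers are older) into calendar
    ages by anchoring one varve to a dated tie point, e.g. a tephra."""
    return tie_age_cal_bp + (np.asarray(varve_numbers) - tie_varve)

# e.g. varve 3000 anchored to a horizon dated to 5100 cal BP:
print(anchor_varve_chronology([0, 3000, 8473], 3000, 5100))
```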


Időjárás ◽  
2021 ◽  
Vol 125 (3) ◽  
pp. 513-519
Author(s):  
Tibor Rácz

Historical rainfall data registered by siphoned rainfall recorder (SRW) devices have long been widely used in rainfall-intensity investigations. A relatively well-known counting error of SRW devices is the siphoning error: the registration of rainfall is blocked temporarily while the measuring tank drains. This causes a systematic underestimation in rainfall and rainfall-intensity measurements. To reduce its consequences, a data correction is crucial when SRW data are used, for example as a reference for climate comparison studies or for deriving intensity-duration-frequency curves. In this paper, a formula is presented to correct the siphoning error of SRW devices in historical rainfall data. In a significant percentage of cases only processed results of the early measurements survive, and sometimes the original measurement records (registration ribbons) have been lost. An essential advantage of the presented formula is that it can be applied to such processed data, which give only the intensity over a time interval of known length. For this correction, only the average rainfall intensity and the length of the time window are needed, in addition to the physical parameters of the SRW device. The correction yields a rainfall-intensity value that is undoubtedly closer to the real average rainfall intensity. The importance of this formula lies in the reprocessing and validation of historical rainfall-intensity data measured by siphoned rainfall recorders.
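The paper's exact formula is not given in the abstract. Under the simple assumptions that rain continues at the true rate while registration is blocked and that one siphoning occurs per tank volume of recorded rain, a correction of the kind described takes the form sketched below; the device constants are illustrative placeholders, not the paper's values.

```python
def corrected_intensity(i_measured_mm_per_h, window_h,
                        tank_capacity_mm=0.5, siphon_time_s=8.0):
    """Correct a time-window average intensity from a siphoned rainfall
    recorder for rain missed while the tank drains. Assumes one
    siphoning of duration siphon_time_s per tank_capacity_mm of
    recorded rain, and window_h much longer than the total blocked time."""
    recorded_mm = i_measured_mm_per_h * window_h
    n_siphons = recorded_mm / tank_capacity_mm
    blocked_h = n_siphons * siphon_time_s / 3600
    # true depth accumulated over window_h, but recorded only during
    # (window_h - blocked_h), so scale the measured intensity up:
    return i_measured_mm_per_h * window_h / (window_h - blocked_h)

print(corrected_intensity(30.0, 0.5))  # heavy rain over a 30-minute window
```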


2020 ◽  
pp. paper26-1-paper26-12
Author(s):  
Denis Kuplyakov ◽  
Yaroslav Geraskin ◽  
Timur Mamedov ◽  
Anton Konushin

We consider the problem of people counting in video surveillance. This is one of the most popular tasks in video analysis, because the resulting data can be used for predictive analytics and the improvement of customer services, traffic control, etc. Our method is based on object tracking in video with a low framerate. We use the algorithm from [1] as a baseline and propose several modifications that improve the quality of people counting. One of the main modifications is to use a head detector instead of a body detector in the tracking pipeline. Head tracking proves to be more robust and accurate, as heads are less susceptible to occlusions. To find the intersection of a person with a signal line, we either raise the signal line to the level of the heads or regress body positions from the available head detections. Our experimental evaluation demonstrates that the modified algorithm surpasses the original in both accuracy and computational efficiency, showing a lower counting error at a lower detection frequency.
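One common way to implement the signal-line crossing test mentioned above is a signed-side check on consecutive track positions, sketched below. It treats the signal line as infinite and is an assumption about, not a reproduction of, the authors' counting logic.

```python
def side(p, a, b):
    """Signed side of point p relative to the directed line a -> b
    (positive on one side, negative on the other, zero on the line)."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossed_line(prev_pt, cur_pt, line_a, line_b):
    """True if a tracked head moved from one side of the signal line to
    the other between two consecutive (low-framerate) frames."""
    return side(prev_pt, line_a, line_b) * side(cur_pt, line_a, line_b) < 0

# e.g. a track crossing a horizontal line y = 5 between two frames:
print(crossed_line((3, 2), (4, 8), (0, 5), (10, 5)))  # True
```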


2020 ◽  
Author(s):  
Saki Ishino ◽  
Takuya Itaki

Abstract The Eucampia Index, calculated from the valve ratio of varieties of the Antarctic diatom Eucampia antarctica, is expected to be a useful indicator of sea-ice coverage and/or sea-surface-temperature variation in the Southern Ocean. Verifying the relationship between the index value and these environmental factors requires considerable effort to classify and count valves of E. antarctica in a very large number of samples. In this study, to realize automated determination of the Eucampia Index, we constructed deep-learning-based models for identifying Eucampia valves among the various particles on a diatom slide. The microfossil Classification and Rapid Accumulation Device (miCRAD) system, which can scan a slide and crop particle images automatically, was employed to collect the training dataset for the models and the test dataset for model verification. Classifying the particle images in the test dataset with the initial model "Eant_1000px_200616" gave an accuracy of 78.8%. The Eucampia Index value of the test dataset was 0.80, and the value predicted by the developed model from the same dataset was 0.76; the predicted value was within the range of the manual counting error. These results suggest that the classification performance of the model is similar to that of a human expert. This study shows for the first time that a model capable of determining the ratio of two diatom species can be constructed using the miCRAD system. The miCRAD system connected with the developed model can classify particle images at the same time as capturing them, so the system can be applied to a large-scale analysis of the Eucampia Index in the Southern Ocean. Depending on the classification categories chosen, a similar method is relevant to investigators who have to process large numbers of diatom samples, for example to detect specific species for biostratigraphic and paleoenvironmental studies.
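Once valves have been classified into the two categories, the index reduces to a simple count ratio, as in the sketch below. The exact published definition (which valve categories enter the numerator and denominator) should be checked against the source; the function assumes index = A / (A + B).

```python
def eucampia_index(n_valves_a, n_valves_b):
    """Eucampia Index as a count ratio of the two classified valve
    categories; assumes index = A / (A + B), which should be verified
    against the published definition."""
    return n_valves_a / (n_valves_a + n_valves_b)

print(round(eucampia_index(80, 20), 2))  # 0.80, as in the test dataset
```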


2020 ◽  
Vol 22 (10) ◽  
pp. 749-759 ◽  
Author(s):  
Chiara Roversi ◽  
Martina Vettoretti ◽  
Simone Del Favero ◽  
Andrea Facchinetti ◽  
Giovanni Sparacino
