calibration dataset: Recently Published Documents

Total documents: 34 (five years: 3)
H-index: 10 (five years: 0)

Author(s): Francesco Panzera, Paolo Bergamo, Donat Fäh

ABSTRACT The national seismic networks of Switzerland comprise more than 200 stations. At the station sites, empirical amplification functions (EAFs) are routinely computed after each earthquake using a generalized inversion method based on the separation of source, path, and site effects. The seismic stations are also characterized through geophysical measurements aimed at estimating shear-wave velocity profiles and the horizontal-to-vertical spectral ratio of ambient vibrations (HVNR). Using this information, the correlation between the HVNR and the EAF is assessed through canonical correlation. Once established, the canonical correlation is used to reconstruct the expected EAFpred at each considered station site in the dataset. The prediction is made individually for all seismic stations, each time excluding the investigated station from the calibration dataset; the reconstruction of EAFpred is performed with two parallel methods. The first uses a combination of the canonical correlation parameters and the Moran index, and the second solves, in a least-squares sense, an overdetermined linear equation system that includes the canonical couples deemed reliable. After a first round of predictions, systematically lower EAFpred at soft-sediment sites and higher EAFpred at hard-rock sites are observed. A possible explanation for this behavior is the “normalization” to the Swiss standard rock profile in the computation of the EAF at the Swiss stations. Therefore, to reduce this effect, geological and geophysical parameters are considered in addition to the HVNR in the canonical correlation. We observe that the final solution improves when the least-squares approach is used with a combination of HVNR, VS30, and the thickness of the ice cover at the last glacial maximum. Moreover, a blind test is performed using data not included in the calibration dataset. The results highlight the ability of the method to provide an estimate of site amplification over chosen frequency bins.
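
As a rough illustration of the leave-one-out prediction scheme described above, the following is a minimal sketch assuming toy data and scikit-learn's CCA; it is not the authors' code and omits their Moran-index variant, the reliability screening of canonical couples, and the additional geological predictors.

```python
# Hedged sketch (toy data, illustrative shapes): fit a canonical correlation model
# between HVNR curves and empirical amplification functions (EAFs) on all stations
# except one, then predict the excluded station's EAF from its HVNR curve.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_stations, n_freq = 40, 20                      # toy set: 40 stations, 20 frequency bins
hvnr = rng.lognormal(size=(n_stations, n_freq))  # H/V spectral ratios
eaf = rng.lognormal(size=(n_stations, n_freq))   # empirical amplification functions

def predict_eaf(target: int, n_components: int = 5) -> np.ndarray:
    """Leave one station out, fit CCA on the rest, predict the excluded EAF."""
    keep = np.arange(n_stations) != target
    cca = CCA(n_components=n_components).fit(hvnr[keep], eaf[keep])
    return cca.predict(hvnr[[target]])[0]

eaf_pred = np.vstack([predict_eaf(i) for i in range(n_stations)])
print(eaf_pred.shape)                            # (40, 20): one predicted EAF per station
```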



2021, Vol 3 (2)
Author(s): Avery Cashion, Grzegorz Cieslewski, Adam Foris, Jiann Su, David Schwellenbach, ...

A muon tracker was developed using three polyvinyl toluene scintillator panels instrumented with photomultiplier tubes (PMTs) mounted at the corners. The panels are mounted in parallel on an aluminum frame, which allows simple adjustment of the angle, orientation, and separation distance between panels. The responses of all PMTs in the system are digitized simultaneously at sub-nanosecond sample spacing. Software was developed to adjust settings and to implement event rejection based on the number of panels that detected a scintillation event within a 400-nanosecond record. The relative responses of the PMTs are used to calculate the position of scintillation events within each panel. The direction of muons through the system can be tracked using the panel strike order. Methods for triangulation by both time-of-flight (TOF) and PMT magnitude response are reported. The time-triangulation method is derived and experimentally demonstrated using parallel cables of differing length. The PMTs used in this experiment are optimized only for amplitude discrimination, not for the low timing jitter that would be required to implement TOF methods in the scintillator panels. A Gaussian process regression machine learning tool was implemented to learn the relationship between PMT response features and positions from a calibration dataset. Resolution is analyzed using different numbers of PMTs and low- versus high-sensitivity PMTs. Muons traveling in forward and reverse directions through the detector system were counted in all six axis orientations. The muon detector was deployed for 28 days in an underground tunnel, and vertical muon counts were recorded.
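
A minimal sketch of the position-reconstruction step follows, assuming a toy corner-PMT amplitude model and scikit-learn's Gaussian process regressor; the geometry, falloff model, and kernel are illustrative assumptions, not the instrument's actual response.

```python
# Hedged sketch: learn the mapping from four corner-PMT pulse amplitudes to the
# (x, y) scintillation position from a calibration set of known source positions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

def pmt_amplitudes(xy: np.ndarray) -> np.ndarray:
    """Toy panel model: amplitude falls off with distance to each corner PMT."""
    corners = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])         # PMTs at panel corners
    d = np.linalg.norm(xy[:, None, :] - corners[None], axis=-1)
    return 1.0 / (1.0 + d**2) + 0.01 * rng.normal(size=d.shape)  # noisy responses

xy_cal = rng.uniform(0, 1, size=(200, 2))      # known calibration positions on the panel
gp = GaussianProcessRegressor(kernel=RBF(0.5) + WhiteKernel(1e-3), normalize_y=True)
gp.fit(pmt_amplitudes(xy_cal), xy_cal)

xy_test = rng.uniform(0, 1, size=(20, 2))
xy_hat = gp.predict(pmt_amplitudes(xy_test))
print(np.mean(np.linalg.norm(xy_hat - xy_test, axis=1)))   # mean position error (panel units)
```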



2021, pp. 1-12
Author(s): Felix Fischer, Brooke Levis, Carl Falk, Ying Sun, John P. A. Ioannidis, ...

Abstract. Background: Previous research on the depression scale of the Patient Health Questionnaire (PHQ-9) has found that different latent factor models maximize empirical measures of goodness of fit. The clinical relevance of these differences is unclear. We aimed to investigate whether depression screening accuracy may be improved by employing latent factor model-based scoring rather than sum scores. Methods: We used an individual participant data meta-analysis (IPDMA) database compiled to assess the screening accuracy of the PHQ-9. We included studies that used the Structured Clinical Interview for DSM (SCID) as a reference standard and split them into calibration and validation datasets. In the calibration dataset, we estimated unidimensional, two-dimensional (separating cognitive/affective and somatic symptoms of depression), and bi-factor models, and the respective cut-offs that maximize combined sensitivity and specificity. In the validation dataset, we assessed the differences in (combined) sensitivity and specificity between the latent variable approaches and the optimal sum score (⩾10), using bootstrapping to estimate 95% confidence intervals for the differences. Results: The calibration dataset included 24 studies (4378 participants, 652 major depression cases); the validation dataset included 17 studies (4252 participants, 568 cases). In the validation dataset, optimal cut-offs of the unidimensional, two-dimensional, and bi-factor models had higher sensitivity (by 0.036, 0.050, and 0.049 points, respectively) but lower specificity (by 0.017, 0.026, and 0.019 points, respectively) compared with the sum score cut-off of ⩾10. Conclusions: In a comprehensive dataset of diagnostic studies, scoring with complex latent variable models does not meaningfully improve the screening accuracy of the PHQ-9 compared with the simple sum score approach.
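
The validation-step comparison can be pictured with a small sketch on simulated data; the latent-score cut-off of 11.5 and the simple participant-level bootstrap are illustrative assumptions only (the actual IPDMA accounts for clustering by study), and the generated scores are not real PHQ-9 data.

```python
# Hedged sketch: sensitivity/specificity of a sum-score cut-off of >= 10 versus a
# latent-model score cut-off, with a bootstrap percentile CI for the difference.
import numpy as np

rng = np.random.default_rng(42)
n = 4252
major_dep = rng.random(n) < 568 / 4252                       # SCID reference standard (toy)
sum_score = rng.poisson(12, n) * major_dep + rng.poisson(5, n) * ~major_dep
factor_score = sum_score + rng.normal(0, 1.5, n)             # stand-in latent-model score

def sens_spec(score, truth, cutoff):
    pos = score >= cutoff
    return np.mean(pos[truth]), np.mean(~pos[~truth])         # sensitivity, specificity

diffs = []
for _ in range(2000):                                         # bootstrap over participants
    idx = rng.integers(0, n, n)
    sens_sum, _ = sens_spec(sum_score[idx], major_dep[idx], 10)
    sens_fac, _ = sens_spec(factor_score[idx], major_dep[idx], 11.5)  # illustrative cut-off
    diffs.append(sens_fac - sens_sum)
print(np.percentile(diffs, [2.5, 97.5]))                      # 95% CI, sensitivity difference
```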



2020, Vol 16 (6), pp. 2599-2617
Author(s): Tom Dunkley Jones, Yvette L. Eley, William Thomson, Sarah E. Greene, Ilya Mandel, ...

Abstract. In the modern oceans, the relative abundances of glycerol dialkyl glycerol tetraether (GDGT) compounds produced by marine archaeal communities show a significant dependence on the local sea surface temperature at the site of deposition. When preserved in ancient marine sediments, the measured abundances of these fossil lipid biomarkers thus have the potential to provide a geological record of long-term variability in planetary surface temperatures. Several empirical calibrations have been made between observed GDGT relative abundances in late Holocene core-top sediments and modern upper ocean temperatures. These calibrations form the basis of the widely used TEX86 palaeothermometer. There are, however, two outstanding problems with this approach: first, the appropriate assignment of uncertainty to estimates of ancient sea surface temperatures based on the relationship of the ancient GDGT assemblage to the modern calibration dataset, and second, the problem of making temperature estimates beyond the range of the modern empirical calibrations (> 30 °C). Here we apply modern machine learning tools, including Gaussian process emulators and forward modelling, to develop a new mathematical approach we call OPTiMAL (Optimised Palaeothermometry from Tetraethers via MAchine Learning) to improve temperature estimation and the representation of uncertainty based on the relationship between ancient GDGT assemblage data and the structure of the modern calibration dataset. We reduce the root mean square uncertainty on temperature predictions (validated using the modern dataset) from ~±6 °C using TEX86-based estimators to ±3.6 °C using Gaussian process estimators for temperatures below 30 °C. We also provide a new quantitative measure of the distance between an ancient GDGT assemblage and the nearest neighbour within the modern calibration dataset, as a test for significant non-analogue behaviour.
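
For orientation, the sketch below shows the standard TEX86 index that underlies the conventional calibrations, together with a generic nearest-neighbour distance from an ancient assemblage to a modern calibration set on toy data; the Euclidean distance here is only in the spirit of the paper's non-analogue screening and is not the specific OPTiMAL metric.

```python
# Hedged sketch (toy abundances): standard TEX86 index and a simple nearest-neighbour
# distance between an ancient GDGT assemblage and modern calibration assemblages.
import numpy as np

def tex86(gdgt1, gdgt2, gdgt3, cren_prime):
    """TEX86 = (GDGT-2 + GDGT-3 + Cren') / (GDGT-1 + GDGT-2 + GDGT-3 + Cren')."""
    return (gdgt2 + gdgt3 + cren_prime) / (gdgt1 + gdgt2 + gdgt3 + cren_prime)

print(tex86(0.30, 0.15, 0.10, 0.05))              # toy assemblage -> 0.5

rng = np.random.default_rng(3)
modern = rng.dirichlet(np.ones(6), size=500)      # toy modern calibration assemblages
ancient = rng.dirichlet(np.ones(6))               # toy ancient assemblage
d_nn = np.min(np.linalg.norm(modern - ancient, axis=1))
print(d_nn)                                       # nearest-neighbour distance (analogue check)
```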



2020, Vol 10 (1)
Author(s): Daniel E. Maidana, Shoji Notomi, Takashi Ueta, Tianna Zhou, Danica Joseph, ...

Abstract. Our aim was to develop an automated retina layer thickness measurement tool for the ImageJ platform that quantitates nuclear layers following the retina contour. We developed ThicknessTool (TT), an automated thickness measurement plugin for the ImageJ platform. To calibrate TT, we created a calibration dataset of mock binary skeletonized mask images with masks of increasing thickness and different rotations. Next, we created a training dataset and performed an agreement analysis of thickness measurements between TT and two masked manual observers. Finally, we tested the performance of TT measurements on a validation dataset of retinal detachment images. In the calibration dataset, there were no differences in layer thickness between measured and known thickness masks, with an overall coefficient of variation of 0.00%. Training dataset measurements of immunofluorescence retina nuclear layers disclosed no significant differences between TT and any observer's average outer nuclear layer (ONL) (p = 0.998), inner nuclear layer (INL) (p = 0.807), or ONL/INL ratio (p = 0.944) measurements. The agreement analysis showed that the bias between TT and the observers' mean was lower than the bias between the observers' means against each other in the ONL (0.77 ± 0.34 µm vs. 3.25 ± 0.33 µm) and INL (1.59 ± 0.28 µm vs. 2.82 ± 0.36 µm). The validation dataset showed that TT can detect significant, true ONL thinning (p = 0.006) beyond manual measurement capabilities (p = 0.069). ThicknessTool can measure retina nuclear layer thickness in a fast, accurate, and precise manner with multi-platform capabilities. In addition, TT can be customized to user preferences and is freely available to download.
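
As a rough conceptual illustration only (Python rather than the ImageJ/Java plugin, and a generic distance-transform idea rather than ThicknessTool's actual algorithm), local layer thickness can be estimated along a skeleton of a binary mask, which follows the layer contour regardless of rotation.

```python
# Conceptual sketch: thickness of a binary layer mask as twice the distance-transform
# value sampled along the layer's skeleton (midline).
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

mask = np.zeros((100, 200), dtype=bool)
mask[40:52, :] = True                      # toy "nuclear layer": a 12-pixel-thick band

dist = distance_transform_edt(mask)        # distance to the nearest background pixel
skel = skeletonize(mask)                   # 1-pixel midline following the layer contour
thickness = 2.0 * dist[skel]               # local thickness sampled along the midline
print(thickness.mean(), thickness.std())   # mean should be close to 12 px for this toy band
```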



Energies, 2020, Vol 13 (20), pp. 5452
Author(s): Rodrigo A. de Marcos, Derek W. Bunn, Antonio Bello, Javier Reneses

This paper develops a new approach to short-term electricity forecasting by focusing on the dynamic specification of an appropriate calibration dataset prior to model specification. It challenges the conventional forecasting principles that adaptive methods should place most emphasis on recent data and that regime switching should likewise model transitions from the latest regime. The approach in this paper recognises that the most relevant dataset, given the episodic, recurrent nature of electricity dynamics, may not be the most recent one. The methodology provides a dynamic calibration dataset approach based on cluster analysis applied to fundamental market regime indicators, as well as structural time-series breakpoint analyses. Forecasting is based on applying a hybrid fundamental optimisation model with a neural network to the appropriate calibration data. The results outperform other benchmark models in backtesting on data from the Iberian electricity market of 2017, a period that presents a considerable number of market structural breaks and evolving market price drivers.
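
A hypothetical sketch of the dynamic calibration-dataset idea follows: cluster historical days on fundamental regime indicators and keep only the days in the same regime as the day to be forecast, rather than simply the most recent days. The variable names, number of clusters, and use of k-means are illustrative assumptions, not the paper's specification.

```python
# Hedged sketch (toy data): select a regime-matched calibration window by clustering.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
indicators = rng.normal(size=(1500, 3))          # ~4 years of daily regime indicators
prices = rng.normal(50, 10, size=1500)           # matching daily average prices (toy)

scaler = StandardScaler().fit(indicators)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaler.transform(indicators))

today = rng.normal(size=(1, 3))                  # indicators observed for the target day
regime = km.predict(scaler.transform(today))[0]
calib_idx = np.where(km.labels_ == regime)[0]    # dynamic calibration dataset
print(len(calib_idx), prices[calib_idx].mean())  # days available to train the forecaster
```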



Geosciences, 2020, Vol 10 (8), pp. 302
Author(s): Francisco Serrano

From the study of planktonic foraminifer assemblages in the sediments of Ocean Drilling Program (ODP) Site 975 (Baleares), sea-surface temperature (SST), seasonality, and salinity were estimated for the Pliocene and Gelasian of the Western Mediterranean. The estimates were carried out with the modern analog technique (MAT) using PaleoUma, a calibration dataset of 735 North Atlantic and Mediterranean core-tops. The reduction of taxonomic variables required to compare Pliocene–Gelasian and present-day analog assemblages leads to statistically insignificant increases in estimation error, as assessed on the calibration dataset itself. In addition, the correlation with δ18O results, as an independent proxy, supports the use of MAT to establish the dominant paleoceanographic frameworks during the Pliocene and Gelasian. The SST curve shows an increasing trend in the average value from the Early Zanclean (19.7 ± 1.8 °C) to the Late Piacenzian (20.9 ± 1.7 °C) and a decrease until the Late Gelasian (18.1 ± 1.4 °C). Seasonality estimates are consistently lower than the current value (9.8 °C), reaching the closest values during the Late Gelasian (8.6 ± 0.8 °C). Salinity estimates are overall slightly lower during the Zanclean (36.7‰ ± 0.5‰) than today (37.3‰), whereas they reach more than 38.5‰ in the Early Piacenzian. The paleoceanographic frameworks deduced from the combination of these parameters suggest that the current water-deficit regime in the Mediterranean clearly predominated throughout the Pliocene and Gelasian. However, from the Piacenzian onwards this regime alternates with stages of water surplus, which are especially frequent in the Late Piacenzian. By the middle of the Early Gelasian, the deficit regime becomes predominant again.
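
For readers unfamiliar with MAT, a generic sketch follows on toy assemblages and SSTs; the PaleoUma dataset, its taxonomy, and the exact dissimilarity settings used in the paper are not reproduced here.

```python
# Hedged sketch of the modern analog technique: squared chord distance between a
# fossil assemblage and each modern core-top, then average the SST of the k closest analogs.
import numpy as np

rng = np.random.default_rng(11)
n_coretops, n_taxa = 735, 25                                 # 735 core-tops, as in the abstract
modern = rng.dirichlet(np.ones(n_taxa), size=n_coretops)     # relative abundances (toy)
modern_sst = rng.uniform(5, 28, size=n_coretops)             # core-top SSTs (toy)

def mat_sst(fossil: np.ndarray, k: int = 10) -> float:
    """Mean SST of the k best modern analogs by squared chord distance."""
    scd = np.sum((np.sqrt(modern) - np.sqrt(fossil))**2, axis=1)
    best = np.argsort(scd)[:k]
    return float(modern_sst[best].mean())

fossil = rng.dirichlet(np.ones(n_taxa))                       # a toy fossil assemblage
print(mat_sst(fossil))
```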



Radiocarbon, 2020, Vol 62 (4), pp. 1001-1043
Author(s): Konrad A Hughen, Timothy J Heaton

ABSTRACT We present new updates to the calendar and radiocarbon (14C) chronologies for the Cariaco Basin, Venezuela. Calendar ages were generated by tuning abrupt climate shifts in Cariaco Basin sediments to those in speleothems from Hulu Cave. Since the original Cariaco–Hulu calendar age model was published, the Hulu Cave δ18O records have been augmented with increased temporal resolution and a greater number of U/Th dates. These updated Hulu Cave records provide increased accuracy as well as precision in the final Cariaco calendar age model. The depth scale for the Ocean Drilling Program Site 1002D sediment core, the primary source of samples for 14C dating, has been corrected to account for missing sediment from a core break, eliminating age-depth anomalies that afflicted the earlier calendar age models. Individual 14C dates for the Cariaco Basin remain unchanged from previous papers, although detailed comparisons of the Cariaco calibration dataset with those from Hulu Cave and Lake Suigetsu suggest that the Cariaco marine reservoir age may have shifted systematically in the past. We describe these recent changes to the Cariaco datasets and provide the data in a comprehensive format that will facilitate use by the community.



Author(s): Shreyas S. Shivakumar, Neil Rodrigues, Alex Zhou, Ian D. Miller, Vijay Kumar, ...


Radiocarbon, 2020, Vol 62 (4), pp. 939-952
Author(s): Charlotte Pearson, Lukas Wacker, Alex Bayliss, David Brown, Matthew Salzer, ...

ABSTRACT In 2018, Pearson et al. published a new sequence of annual radiocarbon (14C) data derived from oak (Quercus sp.) trees from Northern Ireland and bristlecone pine (Pinus longaeva) from North America across the period 1700–1500 BC. The study indicated that the more highly resolved shape of an annually based calibration dataset could improve the accuracy of 14C calibration during this period. This finding had implications for the controversial dating of the eruption of Thera in the Eastern Mediterranean. To test for interlaboratory variation and improve the robustness of the annual dataset for calibration purposes, we have generated a replicate sequence from the same Irish oaks at ETH Zürich. These data are compatible with the Irish oak 14C dataset previously produced at the University of Arizona and are used (along with additional data) to examine inter-tree and interlaboratory variation in multiyear annual 14C time series. The results raise questions about regional 14C offsets at different scales and demonstrate the potential of annually resolved 14C for refining subdecadal and larger-scale features for calibration, solar reconstruction, and multiproxy synchronization.
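
One common way to quantify such an interlaboratory offset is the error-weighted mean of the pairwise differences between the two labs' measurements on the same rings; the sketch below uses toy numbers and a simple weighting scheme, and is not the statistic reported in the paper.

```python
# Hedged sketch (toy data): error-weighted mean offset between two annual 14C series
# measured on the same tree rings, with its standard uncertainty.
import numpy as np

rng = np.random.default_rng(5)
n = 60                                            # 60 shared calendar years (toy)
true_age = rng.normal(3300, 20, n)                # underlying 14C ages, yr BP
lab_a = true_age + rng.normal(0, 15, n)           # lab A measurements (toy)
lab_b = true_age + 8 + rng.normal(0, 15, n)       # lab B, with an 8-yr offset (toy)
sig_a = np.full(n, 15.0)                          # quoted 1-sigma errors
sig_b = np.full(n, 15.0)

diff = lab_b - lab_a
w = 1.0 / (sig_a**2 + sig_b**2)                   # inverse-variance weights
offset = np.sum(w * diff) / np.sum(w)
offset_err = np.sqrt(1.0 / np.sum(w))
print(f"offset = {offset:.1f} +/- {offset_err:.1f} 14C yr")
```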


