linear correction
Recently Published Documents

TOTAL DOCUMENTS: 111 (five years: 34)
H-INDEX: 13 (five years: 3)

2021
Author(s): Danielle Heiner, Dakota Folmsbee, Luke Langkamp, Geoffrey Hutchison

Given the importance of accurate polarizability calculations to many chemical applications, coupled with the need for efficiency when calculating the properties of sets of molecules or large oligomers, we present a benchmark study examining possible calculation methods for polarizable materials. We first investigate the accuracy of the highly efficient semi-empirical tight-binding method GFN2-xTB and the popular D4 dispersion model, comparing their predicted additive polarizabilities to ωB97X-D results for a subset of PubChemQC and a compiled benchmark set of molecules spanning polarizabilities from approximately 3-600 Å³, with a few compounds in the range of approximately 1200-1400 Å³. Although we find GFN2-xTB to have large errors in its polarizability predictions, on large oligomers a quadratic correction factor appears to remedy this. We also compare the accuracy of DFT polarizability calculations run using basis sets of varying size and level of augmentation, determining that a non-augmented basis set may be used for highly polarizable species in conjunction with a linear correction factor to achieve accuracy extremely close to that of aug-cc-pVTZ.
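
The linear and quadratic correction factors described above amount to fitting low-order polynomials that map the inexpensive GFN2-xTB polarizabilities onto ωB97X-D reference values. A minimal sketch of such a fit, using hypothetical polarizability values rather than the study's data:

```python
# Minimal sketch (not the authors' code): fitting linear and quadratic corrections
# that map GFN2-xTB polarizabilities onto DFT (wB97X-D) reference values.
# The arrays below are placeholders for illustration only.
import numpy as np

alpha_gfn2 = np.array([3.2, 45.1, 120.7, 310.4, 602.9])   # hypothetical GFN2-xTB values (Angstrom^3)
alpha_dft  = np.array([3.0, 41.8, 112.3, 287.6, 558.2])   # hypothetical wB97X-D references (Angstrom^3)

# Linear correction: alpha_corrected = a * alpha_gfn2 + b
a, b = np.polyfit(alpha_gfn2, alpha_dft, deg=1)

# Quadratic correction, as suggested for large oligomers:
c2, c1, c0 = np.polyfit(alpha_gfn2, alpha_dft, deg=2)

def correct_linear(alpha):
    """Apply the fitted linear correction to a raw GFN2-xTB polarizability."""
    return a * alpha + b

print(f"linear:    alpha_corr = {a:.3f} * alpha + {b:.3f}")
print(f"quadratic: alpha_corr = {c2:.3e} * alpha^2 + {c1:.3f} * alpha + {c0:.3f}")
```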


2021
Author(s): Zhicheng Lin, Qi Ma, Yang Zhang

Materials in research studies are often presented on digital screens to participants across many subfields of psychology, including clinical, developmental, and social/personality psychology. What is often neglected in current practice is the reporting of critical visual properties, such as luminance, color, contrast, and gamma, which can dramatically affect the appearance of visual materials. Conventional luminance measurement equipment in vision science is both expensive and onerous to operate for novices. A pressing need, if we are to improve current research practice and education in psychology, is to develop affordable and user-friendly tools to measure and calibrate luminance and color. Here we have developed a software package, PsyCalibrator, that takes advantage of low-cost hardware (SpyderX) and makes luminance and color measurement and calibration accessible and user-friendly. Validation of luminance measurement shows that, in addition to excellent accuracy in linear correction, the SpyderX performs at the same level as professional, high-cost photometers (MKII and PR-670) in measurement accuracy. The SpyderX also has very low measurement variance for practical purposes. A tutorial is provided on how to use PsyCalibrator to measure luminance and color and to calibrate displays. Finally, gamma calibration based on visual methods (without photometers) is discussed, together with its own validation and tutorial.
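
Gamma calibration of the kind mentioned above can be illustrated by fitting a simple gamma model to luminance readings taken at a handful of gray levels and inverting it into a lookup table. The sketch below is not part of PsyCalibrator; the model form, gray levels, and luminance values are assumptions for illustration:

```python
# Minimal sketch (not PsyCalibrator code): estimating display gamma from
# luminance readings at a few gray levels, then building a lookup table that
# linearizes the display. All values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

gray = np.array([0, 32, 64, 96, 128, 160, 192, 224, 255])             # requested gray levels
lum  = np.array([0.4, 1.1, 3.6, 8.2, 15.3, 25.1, 38.4, 55.2, 76.0])   # measured cd/m^2 (hypothetical)

def display_model(v, l_min, l_max, gamma):
    # Simple gamma model: L(v) = L_min + (L_max - L_min) * (v / 255) ** gamma
    return l_min + (l_max - l_min) * (v / 255.0) ** gamma

(l_min, l_max, gamma), _ = curve_fit(display_model, gray, lum, p0=[0.5, 80.0, 2.2])

# Inverse lookup table: for each desired linear luminance step, find the gray
# level that produces it under the fitted model.
target = np.linspace(l_min, l_max, 256)
lut = 255.0 * ((target - l_min) / (l_max - l_min)) ** (1.0 / gamma)

print(f"estimated gamma = {gamma:.2f}")
```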


Author(s): Jason S. McCarley

Signal detection analyses often attribute the vigilance decrement to a combination of bias shifts and sensitivity losses. In many vigilance experiments, however, false alarm rates are at or near zero, complicating the analysis of sensitivity. Here, we report Monte Carlo simulations comparing three measures of sensitivity that can be calculated even with extreme hit and false alarm rates: A’, an estimate of the area under the ROC curve that is commonly but mistakenly described as nonparametric; Az calculated using the log-linear correction, a statistic that adjusts individual observers’ data to protect against low false alarm rates; and Az estimated using a Bayesian hierarchical procedure, a measure that protects against extreme false alarm rates by sharing information between observers. Results confirm that bias shifts produce spurious changes in A’, and demonstrate that Az estimated with either a log-linear correction or through hierarchical Bayesian modeling is more robust against low false alarm rates.
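
The log-linear correction referred to above is the standard adjustment of adding 0.5 to the hit and false-alarm counts and 1 to each trial count before computing rates, which keeps the z-transforms finite when an observer makes zero (or only) false alarms. A minimal sketch (not the paper's simulation code):

```python
# Minimal sketch: log-linear correction applied before computing d' and Az
# under the usual equal-variance Gaussian assumptions.
from scipy.stats import norm

def loglinear_sensitivity(hits, signal_trials, false_alarms, noise_trials):
    """Return d' and Az after the log-linear correction of hit/false-alarm rates."""
    hit_rate = (hits + 0.5) / (signal_trials + 1)
    fa_rate = (false_alarms + 0.5) / (noise_trials + 1)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    az = norm.cdf(d_prime / 2 ** 0.5)   # area under the equal-variance Gaussian ROC
    return d_prime, az

# Example: 45 hits in 50 signal trials, 0 false alarms in 50 noise trials.
print(loglinear_sensitivity(45, 50, 0, 50))
```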


2021, Vol 28 (3), pp. 215-222
Author(s): Yong-Chan Ha, Jun-Il Yoo

Background: The aim of this study was to investigate the correlation between bone mineral density (BMD) and body composition measured by the Osteosys Primus® and the GE Lunar Prodigy® and to calculate the conversion rate between the 2 devices. Methods: The 40 subjects were men and women aged 20 to 29 years. All participants were scanned twice on both the Osteosys Primus (OsteoSys) and the GE Lunar Prodigy (GE Healthcare) DXA systems using the manufacturers’ standard scanning and positioning protocols. Results: Compared with the GE Lunar device, the mean fat mass from the Osteosys Primus was overestimated by 12.1% (1,776.9 g) for the whole body, 5.1% (163.9 g) for the gynoid region, and 6.7% (87.2 g) for the android region. Compared with the GE Lunar device, the mean BMDs of the Osteosys Primus were underestimated by 2.3% (0.023 g/cm²) for the whole body and 3.1% (0.035 g/cm²) for L1-4. Compared with the GE Lunar device, the mean lean mass from the Osteosys Primus was underestimated by 2.3% (1,045.3 g) for the total body, 3.8% (179.4 g) for the arms, and 7.7% (1,104.8 g) for the legs. There was a strong correlation of BMD and body composition between the two devices. Conclusions: Linear correction equations were developed to ensure comparability of BMD and muscle mass between the Osteosys Primus and the GE Lunar Prodigy. Importantly, use of equations from previous studies would have increased the discrepancy between the Osteosys Primus and the GE Lunar Prodigy.
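
Cross-calibration equations of this kind are typically obtained by regressing paired measurements from the two devices. A minimal sketch with hypothetical paired BMD values (not the study's data or its exact statistical procedure):

```python
# Minimal sketch: deriving a linear conversion equation between two DXA devices
# from paired measurements by ordinary least squares. Values are hypothetical.
import numpy as np

bmd_osteosys = np.array([1.02, 1.10, 0.95, 1.21, 1.08, 0.99])  # hypothetical whole-body BMD (g/cm^2)
bmd_ge_lunar = np.array([1.05, 1.13, 0.97, 1.24, 1.11, 1.02])  # paired GE Lunar values (g/cm^2)

slope, intercept = np.polyfit(bmd_osteosys, bmd_ge_lunar, deg=1)
r = np.corrcoef(bmd_osteosys, bmd_ge_lunar)[0, 1]

print(f"GE Lunar BMD ~= {slope:.3f} * Osteosys BMD + {intercept:.3f}  (r = {r:.3f})")
```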


2021, Vol 82 (7), pp. 1144-1168
Author(s): Yu. G. Kokunko, S. A. Krasnova, V. A. Utkin

2021, Vol 13 (5), pp. 2053-2075
Author(s): Ethan R. Dale, Stefanie Kremser, Jordis S. Tradowsky, Greg E. Bodeker, Leroy J. Bird, ...

Abstract. MAPM (Mapping Air Pollution eMissions) is a project whose goal is to develop a method to infer airborne particulate matter (PM) emissions maps from in situ PM concentration measurements. In support of MAPM, a winter field campaign was conducted in New Zealand in 2019 (June to September) to obtain the measurements required to test and validate the MAPM methodology. Two different types of instruments measuring PM were deployed: ES-642 remote dust monitors (17 instruments) and Outdoor Dust Information Nodes (ODINs; 50 instruments). The measurement campaign was bracketed by two intercomparisons in which all instruments were co-located with a permanently installed tapered element oscillating microbalance (TEOM) instrument to determine any instrument biases. Changes in biases between the pre- and post-campaign intercomparisons were used to determine instrument drift over the campaign period. Once deployed, each ES-642 was co-located with an ODIN. In addition to the PM measurements, meteorological variables (temperature, pressure, wind speed, and wind direction) were measured at three automatic weather station (AWS) sites established as part of the campaign, with additional data being sourced from 27 further AWSs operated by other agencies. Vertical profile measurements were made with 12 radiosondes during two 24 h periods and complemented measurements made with a mini micropulse lidar and a ceilometer. Here we present the data collected during the campaign and discuss the correction of the measurements made by the various PM instruments. We find that, compared with a simple linear correction, a correction based on environmental conditions improves the quality of measurements retrieved from ODINs but results in over-fitting and increases the uncertainties when applied to the more sophisticated ES-642 instruments. We also compare PM2.5 and PM10 measured by ODINs, which, in some cases, allows us to identify PM from natural and anthropogenic sources. The PM data collected during the campaign are publicly available from https://doi.org/10.5281/zenodo.4542559 (Dale et al., 2020b), and the data from other instruments are available from https://doi.org/10.5281/zenodo.4536640 (Dale et al., 2020a).
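
A simple linear correction of the kind compared above can be derived from the co-location period against the reference TEOM and then applied to the deployment data. A minimal sketch with hypothetical co-location values (this is not the MAPM processing code):

```python
# Minimal sketch: fitting a linear correction for a low-cost PM sensor from its
# co-location with a reference TEOM, then applying it to later readings.
# Concentrations below are hypothetical.
import numpy as np

pm_sensor_coloc = np.array([12.0, 25.5, 40.2, 18.7, 33.1])   # sensor PM2.5 during co-location (ug/m^3)
pm_teom_coloc   = np.array([10.1, 22.0, 35.8, 16.0, 29.4])   # reference TEOM PM2.5 (ug/m^3)

slope, intercept = np.polyfit(pm_sensor_coloc, pm_teom_coloc, deg=1)

def correct_pm(raw):
    """Apply the co-location-derived linear correction to raw sensor readings."""
    return slope * np.asarray(raw) + intercept

print(correct_pm([15.0, 50.0]))
```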


Author(s): Xavier D’Haultfœuille, Lucas Girard, Roland Rathelot

Suppose that a population, composed of a minority and a majority group, is allocated into units, which can be neighborhoods, firms, classrooms, etc. Qualitatively, there is some segregation whenever allocation leads to the concentration of minority individuals in some units more than in others. Quantitative measures of segregation have struggled with the small-unit bias. When units contain few individuals, indices based on the minority shares in units are upward biased. For instance, they would point to a positive amount of segregation even when allocation is strictly random. The command segregsmall implements three recent methods correcting for such bias: the nonparametric, partial identification approach of D’Haultfœuille and Rathelot (2017, Quantitative Economics 8: 39–73); the parametric model of Rathelot (2012, Journal of Business & Economic Statistics 30: 546–553); and the linear correction of Carrington and Troske (1997, Journal of Business & Economic Statistics 15: 402–409). The package also allows for conditional analyses, namely, measures of segregation accounting for characteristics of the individuals or the units.
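
The Carrington and Troske (1997) correction rescales the observed segregation index by its expected value under purely random allocation. A minimal sketch for the dissimilarity index, with that expected value approximated by simulation and with hypothetical unit counts (this is not the segregsmall implementation, which is a Stata command):

```python
# Minimal sketch of a Carrington-Troske style small-unit correction of the
# dissimilarity index D. The expected index under random allocation (D*) is
# approximated by randomly reallocating minority members across units.
import numpy as np

rng = np.random.default_rng(0)

def dissimilarity(minority, total):
    """Duncan dissimilarity index over units."""
    m, t = minority.sum(), total.sum()
    majority = total - minority
    return 0.5 * np.abs(minority / m - majority / (t - m)).sum()

def ct_corrected(minority, total, n_sim=2000):
    """Rescale D by its expected value D* under random allocation."""
    d_obs = dissimilarity(minority, total)
    m_total = minority.sum()
    starts = np.insert(np.cumsum(total)[:-1], 0, 0)   # start index of each unit
    sims = []
    for _ in range(n_sim):
        labels = np.zeros(total.sum(), dtype=int)
        labels[:m_total] = 1
        rng.shuffle(labels)                           # random allocation to slots
        sims.append(dissimilarity(np.add.reduceat(labels, starts), total))
    d_star = np.mean(sims)
    if d_obs >= d_star:
        return (d_obs - d_star) / (1 - d_star)
    return (d_obs - d_star) / d_star

minority = np.array([1, 0, 3, 2, 0, 1])   # hypothetical minority counts per unit
total    = np.array([5, 4, 6, 5, 3, 4])   # hypothetical unit sizes

print(ct_corrected(minority, total))
```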

