fractional error
Recently Published Documents

TOTAL DOCUMENTS: 35 (FIVE YEARS: 9)
H-INDEX: 9 (FIVE YEARS: 2)

Materials ◽  
2021 ◽  
Vol 14 (23) ◽  
pp. 7458
Author(s):  
Karolina Kiełbasa ◽  
Adrianna Kamińska ◽  
Oliwier Niedoba ◽  
Beata Michalkiewicz

Activated carbons with different textural characteristics were derived by the chemical activation of raw beet molasses with solid KOH, with the activation temperature varied over the range 650 °C to 800 °C. The adsorption of CO2 on the activated carbons was investigated. The Langmuir, Freundlich, Sips, Toth, Unilan, Fritz-Schlunder, Radke-Prausnitz, Temkin-Pyzhev, Dubinin-Radushkevich, and Jovanovich equations were selected to fit the experimental CO2 adsorption data. An error analysis (the sum of the squares of the errors, the hybrid fractional error function, the average relative error, Marquardt's percent standard deviation, and the sum of the absolute errors) was conducted to examine the effect of using different error criteria on the calculation of the isotherm model parameters. The best fit was obtained with the Radke-Prausnitz model.
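For concreteness, the sketch below (with hypothetical adsorption data and a Langmuir fit standing in for the ten isotherms) computes the five error functions named in the abstract, using their standard definitions from the isotherm-fitting literature:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical CO2 uptake data (pressure in bar, uptake in mmol/g);
# real values would come from the adsorption measurements.
p = np.array([0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
q_exp = np.array([0.9, 1.5, 2.2, 3.0, 3.4, 3.7, 3.9])

def langmuir(p, qm, b):
    """Langmuir isotherm: q = qm * b * p / (1 + b * p)."""
    return qm * b * p / (1 + b * p)

(qm, b), _ = curve_fit(langmuir, p, q_exp, p0=[4.0, 5.0])
q_cal = langmuir(p, qm, b)

n, n_par = len(q_exp), 2  # number of points, number of fitted parameters
sse = np.sum((q_cal - q_exp) ** 2)                                   # sum of squares of errors
hybrid = 100 / (n - n_par) * np.sum((q_exp - q_cal) ** 2 / q_exp)    # hybrid fractional error function
are = 100 / n * np.sum(np.abs(q_exp - q_cal) / q_exp)                # average relative error
mpsd = 100 * np.sqrt(np.sum(((q_exp - q_cal) / q_exp) ** 2) / (n - n_par))  # Marquardt's percent std dev
eabs = np.sum(np.abs(q_exp - q_cal))                                 # sum of the absolute errors
print(f"SSE={sse:.4f}  HYBRID={hybrid:.3f}  ARE={are:.2f}%  MPSD={mpsd:.2f}%  EABS={eabs:.4f}")
```

Because each criterion weights small and large uptakes differently, minimizing different error functions generally yields different parameter sets for the same isotherm, which is the effect the error analysis examines.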


Author(s):  
Faro A. A. ◽  
Salam K. K. ◽  
Jeremiah O. A. ◽  
Akinwole I. O. ◽  
...  

The importance of pressure vessels (PVs) to industry is one of the reasons why their design and structural integrity should be fully understood and considered when deploying them under different conditions. The design of such vessels needs to be broadened with a detailed thermal-stress analysis because of the time-dependent behaviours they experience under load. Therefore, this study investigated the transient response of a PV subjected to different operating conditions. The PV used for this simulation was designed according to the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code (BPVC) 2019 and subjected to transient stress analysis (transient thermal and structural) using ANSYS software. A complete evaluation of the temperature, heat flux, and resulting stress distribution across the vessel was carried out at four different locations within the designed PV, and the results were compared with analytical results obtained from the appropriate standards. The accuracy of the simulation was validated using a Mesh Independence Study (MIS), the Grid Convergence Index (GCI), and the fractional error between the fine grids used. The results showed different temperature and heat flux distributions at the considered locations; these variations follow the transients produced by the loads applied to the PV. The simulated maximum principal stress was close to the analytically computed stress, with a percentage error of 2.65% with respect to the analytical result. The maximum stress from the MIS (W analysed) of 3400 MPa was close to the 3210 MPa obtained for the maximum stress using the numerical approach (WN). The GCI value obtained was 0.073 and the fractional error was -0.003, which shows that the results presented are a grid-independent solution.
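The GCI check described here follows Roache's standard three-grid procedure. A minimal sketch, with made-up stress values standing in for the study's mesh results:

```python
import numpy as np

# Hypothetical maximum-stress values (MPa) from three systematically
# refined meshes: f1 = fine, f2 = medium, f3 = coarse. Real values
# would come from the ANSYS mesh-independence study.
f1, f2, f3 = 3400.0, 3389.8, 3350.0
r = 2.0    # grid refinement ratio between successive meshes
Fs = 1.25  # safety factor recommended for three-grid studies

# Observed order of convergence from the three solutions
p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)

# Relative (fractional) error between the fine and medium grids
eps = (f2 - f1) / f1

# Grid Convergence Index on the fine grid: the smaller it is, the
# closer the fine-grid solution is to the asymptotic (grid-independent) value
gci_fine = Fs * abs(eps) / (r ** p - 1)
print(f"order p = {p:.2f}, fractional error = {eps:.4f}, GCI = {gci_fine:.4f}")
```

A small GCI together with a near-zero fractional error between the two finest grids is what justifies calling the reported solution grid independent.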


2020 ◽  
Vol 13 (11) ◽  
pp. 6343-6355 ◽  
Author(s):  
David H. Hagan ◽  
Jesse H. Kroll

Abstract. Low-cost sensors for measuring particulate matter (PM) offer the ability to understand human exposure to air pollution at spatiotemporal scales that have previously been impractical. However, such low-cost PM sensors tend to be poorly characterized, and their measurements of mass concentration can be subject to considerable error. Recent studies have investigated how individual factors can contribute to this error, but these studies are largely based on empirical comparisons and generally do not examine the role of multiple factors simultaneously. Here, we present a new physics-based framework and open-source software package (opcsim) for evaluating the ability of low-cost optical particle sensors (optical particle counters and nephelometers) to accurately characterize the size distribution and/or mass loading of aerosol particles. This framework, which uses Mie theory to calculate the response of a given sensor to a given particle population, is used to estimate the fractional error in mass loading for different sensor types given variations in relative humidity, aerosol optical properties, and the underlying particle size distribution. Results indicate that such error, which can be substantial, is dependent on the sensor technology (nephelometer vs. optical particle counter), the specific parameters of the individual sensor, and differences between the aerosol used to calibrate the sensor and the aerosol being measured. We conclude with a summary of likely sources of error for different sensor types, environmental conditions, and particle classes and offer general recommendations for the choice of calibrant under different measurement scenarios.
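As a toy illustration of one error source this framework quantifies (this is not the opcsim API; all efficiencies below are made-up values), consider a nephelometer calibrated against one aerosol but sampling another, so the calibration constant no longer matches the measured particles:

```python
# A nephelometer converts total scattered light to mass using the mass
# scattering efficiency (m^2/g) of its calibration aerosol. If the sampled
# aerosol scatters differently per unit mass, the reported mass is biased.

def reported_mass(true_mass_ugm3, mse_sample, mse_calibrant):
    """Mass reported by a nephelometer calibrated with `mse_calibrant`."""
    signal = mse_sample * true_mass_ugm3  # light scattered by the sample
    return signal / mse_calibrant         # sensor's mass estimate

true_mass = 20.0              # ug/m^3, hypothetical
mse_ammonium_sulfate = 3.0    # m^2/g, calibrant (illustrative value)
mse_biomass_smoke = 4.5       # m^2/g, sampled aerosol (illustrative value)

est = reported_mass(true_mass, mse_biomass_smoke, mse_ammonium_sulfate)
frac_err = est / true_mass - 1
print(f"reported {est:.1f} ug/m^3, fractional error = {frac_err:+.0%}")
```

The full framework computes such responses from Mie theory across sensor types and humidities; the sketch isolates only the calibrant-mismatch term.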


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Tiberiu Tesileanu ◽  
Mary M Conte ◽  
John J Briguglio ◽  
Ann M Hermundstad ◽  
Jonathan D Victor ◽  
...  

Previously, in Hermundstad et al., 2014, we showed that when sampling is limiting, the efficient coding principle leads to a ‘variance is salience’ hypothesis, and that this hypothesis accounts for visual sensitivity to binary image statistics. Here, using extensive new psychophysical data and image analysis, we show that this hypothesis accounts for visual sensitivity to a large set of grayscale image statistics at a striking level of detail, and also identify the limits of the prediction. We define a 66-dimensional space of local grayscale light-intensity correlations, and measure the relevance of each direction to natural scenes. The ‘variance is salience’ hypothesis predicts that two-point correlations are most salient, and predicts their relative salience. We tested these predictions in a texture-segregation task using un-natural, synthetic textures. As predicted, correlations beyond second order are not salient, and predicted thresholds for over 300 second-order correlations match psychophysical thresholds closely (median fractional error <0.13).
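The reported summary statistic can be read, under one common definition of fractional error (an assumption; the paper may normalize differently), as:

```python
import numpy as np

# Hypothetical predicted vs. measured texture-segregation thresholds
# (arbitrary units), standing in for the >300 second-order correlations.
predicted = np.array([0.21, 0.35, 0.18, 0.50, 0.27])
measured = np.array([0.24, 0.33, 0.20, 0.44, 0.30])

# Median over conditions of |prediction - measurement| / measurement
median_frac_err = np.median(np.abs(predicted - measured) / measured)
print(f"median fractional error = {median_frac_err:.3f}")
```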


2020 ◽  
Vol 10 (2) ◽  
Author(s):  
Hassan Jameel Asghar ◽  
Ming Ding ◽  
Thierry Rakotoarivelo ◽  
Sirine Mrabet ◽  
Dali Kaafar

We propose a generic mechanism to efficiently release differentially private synthetic versions of high-dimensional datasets with high utility. The core technique in our mechanism is the use of copulas, which are functions representing dependencies among random variables with a multivariate distribution. Specifically, we use the Gaussian copula to define dependencies between attributes in the input dataset, whose rows are modelled as samples from an unknown multivariate distribution, and then sample synthetic records through this copula. Despite the inherently numerical nature of Gaussian correlations, we construct a method that is applicable to numerical and categorical attributes alike. Our mechanism is efficient in that it takes time only proportional to the square of the number of attributes in the dataset. We propose a differentially private way of constructing the Gaussian copula without compromising computational efficiency. Through experiments on three real-world datasets, we show that we can obtain highly accurate answers to the set of all one-way marginal queries and all two- and three-way positive conjunction queries, with 99% of the query answers having absolute (fractional) error rates between 0.01% and 3%. Furthermore, for a majority of two-way and three-way queries, we outperform independent noise addition through the well-known Laplace mechanism. In terms of computational time, we demonstrate that our mechanism can output synthetic datasets in around 6 minutes 47 seconds on average for an input dataset of about 200 binary attributes and more than 32,000 rows, and in about 2 hours 30 minutes for a much larger dataset of about 700 binary attributes and more than 5 million rows. To further demonstrate scalability, we ran the mechanism on larger (artificial) datasets with 1,000 and 2,000 binary attributes (and 5 million rows), obtaining synthetic outputs in approximately 6 and 19 hours, respectively. These are highly feasible times for synthetic datasets, which are one-off releases.
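A minimal, non-private sketch of the Gaussian-copula step (the differential-privacy noise addition and categorical-attribute handling described in the paper are omitted): fit the copula correlation from rank-transformed data, then sample synthetic rows by inverting the empirical marginals.

```python
import numpy as np
from scipy import stats

def fit_gaussian_copula(X):
    """Estimate the copula correlation: rank-transform each column to (0, 1),
    map through the standard-normal inverse CDF, then correlate."""
    n = len(X)
    U = stats.rankdata(X, axis=0) / (n + 1)  # ranks scaled into (0, 1)
    Z = stats.norm.ppf(U)
    return np.corrcoef(Z, rowvar=False)

def sample_copula(X, R, m, rng):
    """Draw m synthetic rows: sample correlated normals, map to uniforms,
    then invert each column's empirical marginal distribution."""
    Z = rng.multivariate_normal(np.zeros(X.shape[1]), R, size=m)
    U = stats.norm.cdf(Z)
    return np.column_stack(
        [np.quantile(X[:, j], U[:, j]) for j in range(X.shape[1])]
    )

rng = np.random.default_rng(0)
# Toy numerical dataset with known correlation structure
X = rng.multivariate_normal([0, 0, 0],
                            [[1, .8, .2], [.8, 1, .4], [.2, .4, 1]], 1000)
R = fit_gaussian_copula(X)
synthetic = sample_copula(X, R, 500, rng)
```

The quadratic cost in the number of attributes comes from estimating and sampling the d-by-d correlation matrix; the private version of the mechanism perturbs these statistics before sampling.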


2020 ◽  
Vol 496 (2) ◽  
pp. 1718-1729 ◽  
Author(s):  
Wolfgang Enzi ◽  
Simona Vegetti ◽  
Giulia Despali ◽  
Jen-Wei Hsueh ◽  
R Benton Metcalf

ABSTRACT We present the analysis of a sample of 24 SLACS-like galaxy–galaxy strong gravitational lens systems with a background source and deflectors from the Illustris-1 simulation. We study the degeneracy between the complex mass distribution of the lenses, substructures, the surface brightness distribution of the sources, and the time delays. Using a novel inference framework based on Approximate Bayesian Computation, we find that for all the considered lens systems, an elliptical and cored power-law mass density distribution provides a good fit to the data. However, the presence of cores in the simulated lenses affects most reconstructions in the form of a Source Position Transformation. The latter leads to a systematic underestimation of the source sizes by 50 per cent on average, and a fractional error in H0 of around $25_{-19}^{+37}$ per cent. The analysis of a control sample of 24 lens systems, for which we have perfect knowledge of the shape of the lensing potential, leads to a fractional error on H0 of $12_{-3}^{+6}$ per cent. We find no degeneracy between complexity in the lensing potential and the inferred amount of substructure. We recover an average total projected mass fraction in substructures of fsub < 1.7–2.0 × 10−3 at the 68 per cent confidence level, consistent with zero and with the fact that all substructures had been removed from the simulation. Our work highlights the need for higher-resolution simulations to better quantify the lensing effect of more realistic galactic potentials, and shows that additional observational constraints may be required to break existing degeneracies.


2020 ◽  
Vol 146 ◽  
pp. 04001 ◽  
Author(s):  
Matthew Andrew

A novel method for permeability prediction is presented using multivariate structural regression. A machine-learning-based model is trained using a large number (2,190, extrapolated to 219,000) of synthetic datasets constructed using a variety of object-based techniques. Permeability, calculated on each of these networks using traditional digital rock approaches, was used as the target function for a multivariate description of the pore network structure, created from the statistics of a discrete description of grains, pores, and throats generated through image analysis. A regression model was created using an Extra-Trees method, with an error of <4% on the target set. This model was then validated using a composite series of data created both from proprietary datasets of carbonate and sandstone samples and from open-source data available on the Digital Rocks Portal (www.digitalrocksportal.org), with a root-mean-square fractional error of <25%. Such an approach has wide applicability to problems of heterogeneity and scale in pore-scale analysis of porous media, particularly as it has the potential of being applicable to 2D as well as 3D data.
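A sketch of the regression step under stated assumptions: the feature matrix of structural statistics and the target below are synthetic stand-ins (the real features would be the grain/pore/throat statistics from image analysis), and the validation metric is the root-mean-square fractional error named above.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the training set: rows are pore-network descriptors,
# the target is a toy permeability-like quantity. All values are made up.
rng = np.random.default_rng(42)
X = rng.random((2190, 12))  # 12 hypothetical structural statistics
y = 1.0 + 2.0 * X[:, 0] + X[:, 1] ** 2 + 0.1 * rng.standard_normal(2190)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = ExtraTreesRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Root-mean-square fractional error on held-out samples
pred = model.predict(X_te)
rmsfe = np.sqrt(np.mean(((pred - y_te) / y_te) ** 2))
print(f"RMS fractional error = {rmsfe:.3f}")
```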


2019 ◽  
Author(s):  
Tiberiu Teşileanu ◽  
Mary M. Conte ◽  
John J. Briguglio ◽  
Ann M. Hermundstad ◽  
Jonathan D. Victor ◽  
...  

Abstract. Previously, in [1], we showed that when sampling is limiting, the efficient coding principle leads to a “variance is salience” hypothesis, and that this hypothesis accounts for visual sensitivity to binary image statistics. Here, using extensive new psychophysical data and image analysis, we show that this hypothesis accounts for visual sensitivity to a large set of grayscale image statistics at a striking level of detail, and also identify the limits of the prediction. We define a 66-dimensional space of local grayscale light-intensity correlations, and measure the relevance of each direction to natural scenes. The “variance is salience” hypothesis predicts that two-point correlations are most salient, and predicts their relative salience. We tested these predictions in a texture-segregation task using un-natural, synthetic textures. As predicted, correlations beyond second order are not salient, and predicted thresholds for over 300 second-order correlations match psychophysical thresholds closely (median fractional error < 0.13).


2019 ◽  
Vol 19 (5) ◽  
pp. 2787-2812 ◽  
Author(s):  
Betty Croft ◽  
Randall V. Martin ◽  
W. Richard Leaitch ◽  
Julia Burkart ◽  
Rachel Y.-W. Chang ◽  
...  

Abstract. Summertime Arctic aerosol size distributions are strongly controlled by natural regional emissions. Within this context, we use a chemical transport model with size-resolved aerosol microphysics (GEOS-Chem-TOMAS) to interpret measurements of aerosol size distributions from the Canadian Arctic Archipelago during the summer of 2016, as part of the “NETwork on Climate and Aerosols: Addressing key uncertainties in Remote Canadian Environments” (NETCARE) project. Our simulations suggest that condensation of secondary organic aerosol (SOA) from precursor vapors emitted in the Arctic and near Arctic marine (ice-free seawater) regions plays a key role in particle growth events that shape the aerosol size distributions observed at Alert (82.5° N, 62.3° W), Eureka (80.1° N, 86.4° W), and along a NETCARE ship track within the Archipelago. We refer to this SOA as Arctic marine SOA (AMSOA) to reflect the Arctic marine-based and likely biogenic sources for the precursors of the condensing organic vapors. AMSOA from a simulated flux (500 µg m−2 day−1, north of 50° N) of precursor vapors (with an assumed yield of unity) reduces the summertime particle size distribution model–observation mean fractional error by 2- to 4-fold, relative to a simulation without this AMSOA. Particle growth due to the condensable organic vapor flux contributes strongly (30 %–50 %) to the simulated summertime-mean number of particles with diameters larger than 20 nm in the study region. This growth couples with ternary particle nucleation (sulfuric acid, ammonia, and water vapor) and biogenic sulfate condensation to account for more than 90 % of this simulated particle number, which represents a strong biogenic influence. The simulated fit to summertime size-distribution observations is further improved at Eureka and for the ship track by scaling up the nucleation rate by a factor of 100 to account for other particle precursors, such as gas-phase iodine and/or amines and/or fragmenting primary particles, that could be missing from our simulations. Additionally, the fits to the observed size distributions and total aerosol number concentrations for particles larger than 4 nm improve with the assumption that the AMSOA contains semi-volatile species: the model–observation mean fractional error is reduced 2- to 3-fold for the Alert and ship track size distributions. AMSOA accounts for about half of the simulated particle surface area and volume distributions in the summertime Canadian Arctic Archipelago, with climate-relevant simulated summertime pan-Arctic-mean top-of-the-atmosphere aerosol direct (−0.04 W m−2) and cloud-albedo indirect (−0.4 W m−2) radiative effects, which, due to uncertainties, are viewed as order-of-magnitude estimates. Future work should focus on further understanding summertime Arctic sources of AMSOA.
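The model–observation mean fractional error used here is conventionally the symmetric metric of aerosol model evaluation (an assumption about the exact definition used); a minimal sketch with hypothetical concentrations:

```python
import numpy as np

def mean_fractional_error(model, obs):
    """Symmetric mean fractional error common in aerosol model evaluation:
    MFE = (2/N) * sum(|M - O| / (M + O)). Bounded between 0 and 2."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return 2.0 * np.mean(np.abs(model - obs) / (model + obs))

# Hypothetical binned number concentrations (cm^-3): observations vs. two
# simulations, one without and one with the AMSOA condensation source.
obs = np.array([120.0, 300.0, 450.0, 200.0, 80.0])
no_amsoa = np.array([30.0, 100.0, 180.0, 90.0, 25.0])
with_amsoa = np.array([100.0, 260.0, 420.0, 170.0, 60.0])

print(mean_fractional_error(no_amsoa, obs))    # larger: model underpredicts
print(mean_fractional_error(with_amsoa, obs))  # several-fold smaller
```

Because the denominator averages model and observation, the metric penalizes under- and over-prediction symmetrically, which is why it is preferred over plain relative error for size-distribution comparisons.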


2018 ◽  
Author(s):  
Betty Croft ◽  
Randall V. Martin ◽  
W. Richard Leaitch ◽  
Julia Burkart ◽  
Rachel Y.-W. Chang ◽  
...  

Abstract. Summertime Arctic aerosol size distributions are strongly controlled by natural regional emissions. Within this context, we use a chemical transport model with size-resolved aerosol microphysics (GEOS-Chem-TOMAS) to interpret measurements of aerosol size distributions from the Canadian Arctic Archipelago during the summer of 2016, as part of the NETwork on Climate and Aerosols: addressing key uncertainties in Remote Canadian Environments (NETCARE). Our simulations suggest that condensation of secondary organic aerosol (SOA) from precursor vapors emitted in the Arctic and near Arctic marine (open ocean and coastal) regions plays a key role in particle growth events that shape the aerosol size distributions observed at Alert (82.5° N, 62.3° W), Eureka (80.1° N, 86.4° W), and along a NETCARE ship track within the Archipelago. We refer to this SOA as Arctic marine SOA (Arctic MSOA) to reflect the Arctic marine-based and likely biogenic sources for the precursors of the condensing organic vapors. Arctic MSOA from a simulated flux (500 μg m−2 d−1, north of 50° N) of precursor vapors (assumed yield of unity) reduces the summertime particle size distribution model-observation mean fractional error by 2- to 4-fold, relative to a simulation without this Arctic MSOA. Particle growth due to the condensable organic vapor flux contributes strongly (30–50 %) to the simulated summertime-mean number of particles with diameters larger than 20 nm in the study region, and couples with ternary particle nucleation (sulfuric acid, ammonia, and water vapor) and biogenic sulfate condensation to account for more than 90 % of this simulated particle number, a strong biogenic influence. The simulated fit to summertime size-distribution observations is further improved at Eureka and for the ship track by scaling up the nucleation rate by a factor of 100 to account for other particle precursors such as gas-phase iodine and/or amines and/or fragmenting primary particles that could be missing from our simulations. Additionally, the fits to observed size distributions and total aerosol number concentrations for particles larger than 4 nm improve with the assumption that the Arctic MSOA contains semi-volatile species, reducing the model-observation mean fractional error by 2- to 3-fold for the Alert and ship track size distributions. Arctic MSOA accounts for more than half of the simulated total particulate organic matter mass concentrations in the summertime Canadian Arctic Archipelago, and this Arctic MSOA has strong simulated summertime pan-Arctic-mean top-of-the-atmosphere aerosol direct (−0.04 W m−2) and cloud-albedo indirect (−0.4 W m−2) radiative effects. Future work should focus on further understanding summertime Arctic sources of Arctic MSOA.

