Adjustment of observation accuracy harmonisation parameters in optimising the network’s reliability

2018 ◽  
Vol 105 (1) ◽  
pp. 53-59
Author(s):  
Edward Nowak ◽  
Waldemar Odziemczyk

Abstract Appropriate precision and low cost are the basic requirements that a geodetic network design has to fulfil. Reliability, translating into the ability to detect gross errors in the observations and into higher certainty of the obtained point positions, is an important network characteristic. The principal way to provide appropriate network reliability is to acquire a suitably large number of redundant observations. This study focuses on optimising the observation accuracy harmonisation procedure, which reaches the required level of reliability by modifying the a priori standard deviations of the observations. A parameterisation of the accuracy harmonisation is proposed, and the influence of the individual parameters on the effectiveness of the harmonisation procedure is tested. Based on the test results, an optimal set of harmonisation parameters that maximises the efficiency of the harmonisation algorithm is proposed.
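
A minimal sketch of the redundancy-number formalism underlying such harmonisation may help; this is an illustration under our own assumptions, not the authors' algorithm. It computes the reliability (redundancy) numbers of a linearised network and crudely inflates the a priori standard deviations of under-controlled observations; the target index r_min and the inflation step are hypothetical parameters.

```python
import numpy as np

def redundancy_numbers(A, sigma):
    """Redundancy (reliability) numbers r = diag(I - A N^-1 A^T P) of a
    linearised network with design matrix A (n observations x u unknowns)
    and a priori standard deviations sigma."""
    P = np.diag(1.0 / sigma**2)                  # observation weight matrix
    N = A.T @ P @ A                              # normal equation matrix
    R = np.eye(A.shape[0]) - A @ np.linalg.solve(N, A.T @ P)
    return np.diag(R)

def harmonise(A, sigma, r_min=0.3, step=1.05, max_iter=200):
    """Crude harmonisation loop: while any observation has a redundancy
    number below r_min, inflate its a priori standard deviation, which
    shifts redundancy towards it. Illustrative only."""
    sigma = sigma.astype(float).copy()
    for _ in range(max_iter):
        r = redundancy_numbers(A, sigma)
        low = r < r_min
        if not low.any():
            break
        sigma[low] *= step                       # down-weight deficient observations
    return sigma, redundancy_numbers(A, sigma)
```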

2018 ◽  
Vol 11 (4) ◽  
pp. 1937-1946 ◽  
Author(s):  
Jinsol Kim ◽  
Alexis A. Shusterman ◽  
Kaitlyn J. Lieschke ◽  
Catherine Newman ◽  
Ronald C. Cohen

Abstract. The newest generation of air quality sensors is small, low cost, and easy to deploy. These sensors are an attractive option for developing dense observation networks in support of regulatory activities and scientific research. They are also of interest for use by individuals to characterize their home environment and for citizen science. However, these sensors are difficult to interpret. Although some have an approximately linear response to the target analyte, that response may vary with time, temperature, and/or humidity, and the cross-sensitivity to non-target analytes can be large enough to be confounding. Standard approaches to calibration that are sufficient to account for these variations require a quantity of equipment and labor that negates the attractiveness of the sensors' low cost. Here we describe a novel calibration strategy for a set of sensors, including CO, NO, NO2, and O3, that makes use of (1) multiple co-located sensors, (2) a priori knowledge about the chemistry of NO, NO2, and O3, (3) an estimate of mean emission factors for CO, and (4) the global background of CO. The strategy requires one or more well-calibrated anchor points within the network domain, but it does not require direct calibration of any of the individual low-cost sensors. The procedure nonetheless accounts for temperature and drift, in both the sensitivity and zero offset. We demonstrate this calibration on a subset of the sensors comprising BEACO2N, a distributed network of approximately 50 sensor “nodes”, each measuring CO2, CO, NO, NO2, O3 and particulate matter at 10 s time resolution and approximately 2 km spacing within the San Francisco Bay Area.
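
The co-location step of such a strategy can be sketched as an ordinary least-squares fit against an anchor instrument. This is a generic illustration, not the BEACO2N processing chain; the linear model with a temperature term and all function names are our assumptions.

```python
import numpy as np

def fit_calibration(raw, temp, reference):
    """Least-squares fit of reference ~ gain*raw + t_sens*temp + offset
    over a co-location period with a well-calibrated anchor instrument."""
    X = np.column_stack([raw, temp, np.ones_like(raw)])
    coef, *_ = np.linalg.lstsq(X, reference, rcond=None)
    return coef                     # gain, temperature sensitivity, zero offset

def apply_calibration(raw, temp, coef):
    """Convert a raw sensor signal to a calibrated concentration."""
    gain, t_sens, offset = coef
    return gain * raw + t_sens * temp + offset
```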


2018 ◽  
Vol 106 (1) ◽  
pp. 1-7
Author(s):  
Edward Nowak ◽  
Waldemar Odziemczyk

Abstract An optimally designed geodetic network is characterised by an appropriate level of precision and the lowest possible setup cost. Reliability, translating into the ability to detect blunders in the observations and into higher certainty of the obtained point positions, is an important network characteristic. The principal way to provide appropriate network reliability is to acquire a suitably large number of redundant observations. This approach, however, faces limitations resulting from the extra cost. This paper analyses the possibility of providing appropriate reliability parameters for networks with moderate redundancy. A common problem in such cases is dependency between observations, which prevents the required reliability index being reached for each individual observation. The authors propose a methodology for analysing dependencies between observations that aims to determine whether optimal reliability indices can be achieved for individual observations or for groups of observations. The suggested network structure analysis procedures are illustrated with numerical examples.
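
The dependency analysis can be illustrated with the reliability matrix of the adjustment. The sketch below is a heuristic of our own, not the authors' procedure: large off-diagonal entries of R flag observation pairs whose reliability indices cannot be tuned independently, and the threshold is an arbitrary assumption.

```python
import numpy as np

def reliability_matrix(A, P):
    """R = I - A (A^T P A)^{-1} A^T P. Diagonal entries are the redundancy
    numbers; a large off-diagonal |R[i, j]| indicates that observations i
    and j control each other's blunders."""
    N = A.T @ P @ A
    return np.eye(A.shape[0]) - A @ np.linalg.solve(N, A.T @ P)

def dependent_pairs(A, P, threshold=0.4):
    """Flag strongly coupled observation pairs; for such groups only a
    joint (group) reliability index can realistically be optimised."""
    R = reliability_matrix(A, P)
    n = R.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(R[i, j]) > threshold]
```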


2021 ◽  
Vol 47 (4) ◽  
pp. 392-401
Author(s):  
Volker Kaul

Liberalism believes that individuals are endowed a priori with reason, or at least agency, and that it is up to that reason and agency to make choices, commitments and so on. Communitarianism criticizes liberalism’s explicit and deliberate neglect of the self and insists that we attain a self and identity only through the effective recognition of significant others. However, personal autonomy does not seem to be a default position: neither reason nor community will provide it inevitably. It is therefore important to go beyond the liberal–communitarian divide. This article analyses various proposals in this direction, asks about the place of communities and the individual in times of populism and the pandemic, and provides a global perspective on the liberal–communitarian debate.


2021 ◽  
Vol 11 (4) ◽  
pp. 1399
Author(s):  
Jure Oder ◽  
Cédric Flageul ◽  
Iztok Tiselj

In this paper, we present uncertainties of statistical quantities of direct numerical simulations (DNS) with small numerical errors. The uncertainties are analysed for channel flow and for a flow separation case in a confined backward-facing step (BFS) geometry. The infinite channel flow case has two homogeneous directions, which is usually exploited to speed up the convergence of the results. As we show, such a procedure reduces the statistical uncertainties of the results by up to an order of magnitude, and the effect is strongest in the near-wall regions. In the case of flow over a confined BFS there are no such directions, and thus very long integration times are required. The individual statistical quantities converge with the square root of the integration time, so in order to improve the uncertainty by a factor of two, the simulation has to be prolonged by a factor of four. We provide an estimator that can be used to evaluate a priori the relative statistical uncertainties of a DNS from results obtained with a Reynolds-averaged Navier–Stokes simulation. In the DNS, the estimator can be used to predict the averaging time, and with it the simulation time, required to achieve a certain relative statistical uncertainty of the results. For accurate evaluation of averages and their uncertainties, it is not necessary to use every time step of the DNS. We observe that the statistical uncertainty of the results is unaffected by reducing the number of samples as long as the period between two consecutive samples, measured in Courant–Friedrichs–Lewy (CFL) condition units, stays below one. Beyond this limit, the estimated uncertainties start to grow significantly.
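
A common way to estimate such statistical uncertainties from a finite, correlated signal is the batch-means method; the sketch below is a generic illustration consistent with the square-root convergence described above, not the authors' estimator.

```python
import numpy as np

def batch_means_uncertainty(x, n_batches=20):
    """Batch-means estimate of the statistical uncertainty of a time
    average of a correlated signal: split the series into batches assumed
    to be much longer than the integral time scale and use the scatter of
    the batch averages. Halving the uncertainty requires four times more
    samples, consistent with square-root convergence."""
    m = len(x) // n_batches
    batch_avg = np.array([x[i*m:(i+1)*m].mean() for i in range(n_batches)])
    mean = x[:n_batches*m].mean()
    err = batch_avg.std(ddof=1) / np.sqrt(n_batches)
    return mean, err
```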


2020 ◽  
Vol 68 (6) ◽  
pp. 817-847
Author(s):  
Sebastian Gardner

Abstract Critics have standardly regarded Sartre’s Critique of Dialectical Reason as an abortive attempt to overcome the subjectivist individualism of his early philosophy, motivated by a recognition that Being and Nothingness lacks ethical and political significance, but derailed by Sartre’s Marxism. In this paper I offer an interpretation of the Critique which, if correct, shows it to offer a coherent and highly original account of social and political reality, one that merits attention both in its own right and as a reconstruction of the philosophical foundation of Marxism. The key to Sartre’s theory of collective and historical existence in the Critique is a thesis carried over from Being and Nothingness: intersubjectivity on Sartre’s account is inherently aporetic, and social ontology reproduces in magnified form its limited intelligibility, lack of transparency, and necessary frustration of the demands of freedom. Sartre’s further conjecture – which can be formulated a priori but requires a posteriori verification – is that man’s collective historical existence may be understood as the means by which the antinomy within human freedom, insoluble at the level of the individual, is finally overcome. The Critique therefore provides the ethical theory promised in Being and Nothingness.


2019 ◽  
Vol 77 (2) ◽  
pp. 115-121
Author(s):  
Annina Ropponen ◽  
Katalin Gémes ◽  
Paolo Frumento ◽  
Gino Almondo ◽  
Matteo Bottai ◽  
...  

Objectives We aimed to develop and validate a prediction model for the duration of sickness absence (SA) spells due to back pain (International Statistical Classification of Diseases and Related Health Problems, 10th Revision: M54), using Swedish nationwide register microdata. Methods Information on all new SA spells >14 days from 1 January 2010 to 30 June 2012 and on possible predictors was obtained. The duration of SA was predicted using piecewise constant hazard models. Nine predictors were selected for the final model based on an a priori decision and log-likelihood loss. The final model was estimated in a random sample of 70% of the SA spells and later validated in the remaining 30%. Results Overall, 64 048 SA spells due to back pain were identified during the 2.5 years; 74% lasted ≤90 days and 9% lasted >365 days. The predictors included in the final model were age, sex, geographical region, employment status, multimorbidity, SA extent at the start of the spell, initiation of the SA spell in primary healthcare, and the number of SA days and specialised outpatient healthcare visits in the preceding year. The overall c-statistic (0.547, 95% CI 0.542 to 0.552) suggested a low discriminatory capacity at the individual level. The c-statistic was 0.643 (95% CI 0.634 to 0.652) for predicting >90 days spells, 0.686 (95% CI 0.676 to 0.697) for predicting >180 days spells and 0.753 (95% CI 0.740 to 0.766) for predicting >365 days spells. Conclusions The model discriminates SA spells >365 days from shorter SA spells with good discriminatory accuracy.
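
The reported c-statistics for the dichotomised endpoints can be reproduced conceptually with a small function; this is a toy illustration of the metric, not the piecewise constant hazard model itself.

```python
import numpy as np

def c_statistic(risk, event):
    """Concordance (c-statistic) for a binary endpoint, e.g. an SA spell
    lasting >365 days: the probability that a randomly chosen long spell
    was assigned a higher predicted risk than a randomly chosen short
    one, with ties counted as one half."""
    risk, event = np.asarray(risk, float), np.asarray(event, bool)
    pos, neg = risk[event], risk[~event]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```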


Author(s):  
Shibnath Mukherjee ◽  
Aryya Gangopadhyay ◽  
Zhiyuan Chen

While data mining has been widely acclaimed as a technology that can bring potential benefits to organizations, such efforts may be negatively impacted by the possibility of discovering sensitive patterns, particularly in patient data. In this article the authors present an approach to identify the optimal set of transactions that, if sanitized, would result in hiding sensitive patterns while reducing the accidental hiding of legitimate patterns and the damage done to the database as much as possible. Their methodology allows the user to adjust the weights assigned to the benefit in terms of the number of restrictive patterns hidden, the cost in terms of the number of legitimate patterns hidden, and the damage to the database in terms of the difference between the marginal frequencies of items in the original and sanitized databases. Most approaches to this problem found in the literature are purely heuristic, without a formal treatment of optimality. While ILP has previously been used in a few works as a formal optimization approach, the novelty of this method is its extremely low cost-complexity model in contrast to the others. The authors implemented their methodology in C and C++ and ran several experiments with synthetic data generated with the IBM synthetic data generator. The experiments show excellent results when compared to those in the literature.
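
A much simplified ILP in this spirit can be written with a generic solver. The model below is our own reduction, not the authors' formulation: it picks the least damaging transactions to sanitise so that each sensitive pattern drops below the minimum support, and the damage scores stand in for the weighted benefit/cost/damage terms.

```python
import pulp

def sanitise(sensitive, support, msup, damage):
    """Choose the cheapest set of transactions to sanitise so that every
    sensitive pattern falls below the minimum support msup. `sensitive`
    maps a pattern to the ids of transactions supporting it; `damage[t]`
    scores the harm of sanitising transaction t, e.g. the number of
    legitimate patterns it supports."""
    txns = {t for ts in sensitive.values() for t in ts}
    y = {t: pulp.LpVariable(f"y_{t}", cat="Binary") for t in txns}
    prob = pulp.LpProblem("pattern_sanitisation", pulp.LpMinimize)
    prob += pulp.lpSum(damage[t] * y[t] for t in txns)   # total damage
    for p, ts in sensitive.items():
        # sanitise enough supporting transactions to hide pattern p
        prob += pulp.lpSum(y[t] for t in ts) >= support[p] - msup + 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return sorted(t for t in txns if y[t].value() == 1)
```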


Author(s):  
T. Guo ◽  
A. Capra ◽  
M. Troyer ◽  
A. Gruen ◽  
A. J. Brooks ◽  
...  

Recent advances in the automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater, utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single-medium cases. This study is part of a larger project on 3D measurements of the temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated from images acquired with a system camera mounted in an underwater housing and with the popular GoPro cameras, respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and to quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in air and underwater, and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the same objects measured by the system camera in air (taken as the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.
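
Relative accuracy comparisons of this kind are commonly computed as cloud-to-cloud nearest-neighbour distances. The sketch below is a generic illustration, assuming the two clouds are already registered in a common frame; it is not the evaluation code used in the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_error(test_cloud, reference_cloud):
    """Nearest-neighbour distances from a test point cloud (e.g. a GoPro
    reconstruction) to a reference cloud (e.g. the system camera in air),
    a common proxy for the relative accuracy of a reconstruction."""
    d, _ = cKDTree(reference_cloud).query(test_cloud)
    return d.mean(), np.sqrt((d ** 2).mean())   # mean and RMS distance
```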


2019 ◽  
Vol 3 (1) ◽  
pp. 67
Author(s):  
Kyle Goslin ◽  
Markus Hofmann

Automatic Search Query Enhancement (ASQE) is the process of modifying a user-submitted search query and identifying terms that can be added or removed to enhance the relevance of documents retrieved from a search engine. ASQE differs from other enhancement approaches in that no human interaction is required. ASQE algorithms typically rely on a source of a priori knowledge to aid the process of identifying relevant enhancement terms. This paper describes the results of a qualitative analysis of the enhancement terms generated by the Wikipedia NSubstate Algorithm (WNSSA) for ASQE. The WNSSA utilises Wikipedia as the sole source of a priori knowledge during the query enhancement process. As each Wikipedia article typically represents a single topic, during the enhancement process of the WNSSA a mapping is performed between the user’s original search query and the Wikipedia articles relevant to the query. If this mapping is performed correctly, a collection of potentially relevant terms and acronyms becomes accessible for ASQE. This paper reviews the results of a qualitative analysis performed on the individual enhancement terms generated for each of the 50 test topics from the TREC-9 Web Topic collection. The contributions of this paper include: (a) a qualitative analysis of the generated WNSSA search query enhancement terms and (b) an analysis of the concepts represented in the TREC-9 Web Topics, detailing interpretation issues during the query-to-Wikipedia article mapping performed by the WNSSA.
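
A toy analogue of the query-to-article mapping can be sketched against the public Wikipedia search API. This is not the WNSSA itself: the term selection by simple frequency and the stop-word list are our assumptions.

```python
import re
import requests
from collections import Counter

API = "https://en.wikipedia.org/w/api.php"
STOP = {"this", "that", "with", "from", "which", "were", "also", "have"}

def enhancement_terms(query, n_terms=5):
    """Map a query to its top-ranked Wikipedia article and return the
    article's most frequent content words as candidate enhancement terms."""
    hit = requests.get(API, params={"action": "query", "list": "search",
                                    "srsearch": query,
                                    "format": "json"}).json()
    page_id = hit["query"]["search"][0]["pageid"]
    page = requests.get(API, params={"action": "query", "prop": "extracts",
                                     "explaintext": 1, "pageids": page_id,
                                     "format": "json"}).json()
    text = next(iter(page["query"]["pages"].values()))["extract"]
    words = re.findall(r"[a-z]{4,}", text.lower())
    counts = Counter(w for w in words if w not in STOP)
    return [w for w, _ in counts.most_common(n_terms)]
```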

