Data Analysis for Tracking and Optimizing Boiler Performance

Author(s):  
Kushi Sellahennedige ◽  
Jason Lee

To understand how a boiler is performing and operating, it is critical to obtain data throughout its operation. Data collection and storage methods have evolved over the years, improving both the quality and quantity of the data. Data is valuable for tracking current unit performance, for troubleshooting, and for narrowing down potential performance issues. Proper use of data collection and analysis may minimize the need for scheduled performance testing except when specific data points are required. This paper discusses how sensitivity analysis can be used to determine the effect that missing or poor quality data has on the desired analysis. It covers data collection and evaluation for various cases and the relevant ASME codes. The paper also reviews the various methods available for data representation, allowing the engineer to easily track key operating parameters.
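
As a rough illustration of the kind of sensitivity analysis described above (not taken from the paper itself), the sketch below perturbs each measured input to a simplified efficiency calculation and reports how the result shifts. The efficiency formula, coefficients, and parameter names are placeholder assumptions, not an ASME PTC 4 calculation.

# Illustrative one-at-a-time sensitivity sketch (placeholder model, not ASME PTC 4).

def boiler_efficiency(flue_gas_temp_c, ambient_temp_c, excess_o2_pct):
    """Toy efficiency estimate: 100% minus a crude dry-flue-gas loss term."""
    stack_loss = 0.022 * (flue_gas_temp_c - ambient_temp_c) * (1 + excess_o2_pct / 21.0)
    return 100.0 - stack_loss

baseline = {"flue_gas_temp_c": 180.0, "ambient_temp_c": 25.0, "excess_o2_pct": 3.5}
base_eff = boiler_efficiency(**baseline)

# Perturb each measured input by +/-5% to see which data uncertainties matter most.
for name, value in baseline.items():
    for factor in (0.95, 1.05):
        perturbed = dict(baseline, **{name: value * factor})
        delta = boiler_efficiency(**perturbed) - base_eff
        print(f"{name} x{factor:.2f}: efficiency change {delta:+.2f} pts")

Inputs whose perturbation moves the result the most are the ones where missing or poor quality data would do the most damage to the analysis.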

2006 ◽  
Vol 21 (1) ◽  
pp. 67-70 ◽  
Author(s):  
Brian H. Toby

Important Rietveld error indices are defined and discussed. It is shown that while smaller error index values indicate a better fit of a model to the data, wrong models fitted to poor quality data may exhibit smaller error index values than superb models fitted to very high quality data.
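
For reference, the weighted-profile and expected R factors and the goodness of fit discussed here are conventionally defined as follows (LaTeX notation; y_i,obs and y_i,calc are the observed and calculated profile intensities, w_i the weights, N the number of data points and P the number of refined parameters):

R_{wp} = \sqrt{\frac{\sum_i w_i \left(y_{i,\mathrm{obs}} - y_{i,\mathrm{calc}}\right)^2}{\sum_i w_i\, y_{i,\mathrm{obs}}^2}},
\qquad
R_{\mathrm{exp}} = \sqrt{\frac{N - P}{\sum_i w_i\, y_{i,\mathrm{obs}}^2}},
\qquad
\chi^2 = \left(\frac{R_{wp}}{R_{\mathrm{exp}}}\right)^2

Because R_exp depends on the counting statistics of the data, noisy (poor quality) data inflate the denominator and can make R_wp look deceptively small even for a wrong model.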


2019 ◽  
Author(s):  
Fiona Pye ◽  
Nussaïbah B Raja ◽  
Bryan Shirley ◽  
Ádám T Kocsis ◽  
Niklas Hohmann ◽  
...  

In a world where an increasing number of resources are hidden behind paywalls and monthly subscriptions, it is becoming crucial for the scientific community to invest energy into freely available, community-maintained systems. Open-source software projects offer a solution, with freely available code which users can utilise and modify under an open source licence. In addition to software accessibility and methodological repeatability, this also enables and encourages the development of new tools. As palaeontology moves towards data-driven methodologies, it is becoming more important to acquire and provide high quality data through reproducible, systematic procedures. Within the field of morphometrics, it is vital to adopt digital methods that help mitigate human bias in data collection. In addition, mathematically founded approaches can reduce the subjective decisions which plague classical data. This can be further developed through automation, which increases the efficiency of data collection and analysis. With these concepts in mind, we introduce two open-source shape analysis software packages that arose from projects within the medical imaging field. These are ImageJ, an image processing program with batch processing features, and 3DSlicer, which focuses on 3D informatics and visualisation. They are easily extensible using common programming languages, with 3DSlicer containing an internal Python interactor, and ImageJ allowing the incorporation of several programming languages within its interface alongside its own simplified macro language. Additional features created by other users are readily available on GitHub or through the software itself. In the examples presented, an ImageJ plugin “FossilJ” has been developed which provides semi-automated morphometric bivalve data collection. 3DSlicer is used with the extension SPHARM-PDM, applied to synchrotron scans of coniform conodonts for comparative morphometrics, for which small assistant tools have been created.
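
As a generic illustration of the batch-style, automated morphometric data collection advocated above (this sketch is not FossilJ or a 3DSlicer extension; it uses the scikit-image and pandas libraries, and the folder and file names are hypothetical):

# Generic batch morphometrics sketch (not FossilJ or SPHARM-PDM).
# Assumes a folder of pre-binarised specimen images where background pixels are zero.
from pathlib import Path

import pandas as pd
from skimage import io, measure

rows = []
for image_path in sorted(Path("specimens/").glob("*.png")):   # hypothetical folder
    mask = io.imread(image_path, as_gray=True) > 0             # treat nonzero pixels as specimen
    labels = measure.label(mask)
    for region in measure.regionprops(labels):
        rows.append({
            "specimen": image_path.stem,
            "area": region.area,
            "perimeter": region.perimeter,
            "eccentricity": region.eccentricity,
            "major_axis": region.major_axis_length,
            "minor_axis": region.minor_axis_length,
        })

pd.DataFrame(rows).to_csv("morphometrics.csv", index=False)    # hypothetical output file

Running every specimen through the same script removes per-observer measurement decisions and makes the resulting dataset straightforward to regenerate.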


2019 ◽  
pp. 23-34
Author(s):  
Harvey Goldstein ◽  
Ruth Gilbert

This chapter addresses data linkage, which is key to using big administrative datasets to improve efficient and equitable services and policies. These benefits need to be weighed against potential harms, concerns about which have mainly focussed on privacy. In this chapter we argue that the public and researchers should also be alert to other kinds of harms. These include misuses of big administrative data through poor quality data, misleading analyses, misinterpretation or misuse of findings, and restrictions limiting what questions can be asked and by whom, resulting in research not achieved and advances not made for the public benefit. Ensuring that big administrative data are validly used for public benefit requires increased transparency about who has access and whose access is denied, how data are processed, linked and analysed, and how analyses or algorithms are used in public and private services. Public benefits, and especially trust, require replicable analyses by many researchers, not just a few data controllers. Wider use of big data will be helped by establishing a number of safe data repositories, fully accessible to researchers and their tools, and independent of the current monopolies on data processing, linkage, enhancement and uses of data.


2014 ◽  
Vol 9 (1) ◽  
pp. 69-77 ◽  
Author(s):  
Toshio Okazumi ◽  
Mamoru Miyamoto ◽  
Badri Bhakta Shrestha ◽  
Maksym Gusyev

Flood risk assessment should be one of the basic methods for disaster damage mitigation, used to identify and estimate potential damage before disasters and to provide appropriate information for countermeasures. Existing methods usually do not account for uncertainty in risk assessment results. The concept of uncertainty is especially important for developing countries, where risk assessment results may often be unreliable due to inadequate and poor quality data. We focus on three questions concerning risk assessment results in this study: a) How much does lack of data in developing countries influence flood risk assessment results? b) Which data most influence the results? and c) Which data should be prioritized in data collection to improve risk assessment effectiveness? We found the largest uncertainty in the damage data among observation, model, and agricultural damage calculations. We conclude that reliable disaster damage data collection must be emphasized to obtain reliable flood risk assessment results and to prevent uncertainty where possible. We propose actions to improve assessment task efficiency and investment effectiveness for developing countries.
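
A minimal sketch of the kind of uncertainty propagation this study motivates (not the authors' model): Monte Carlo sampling of an uncertain damage dataset to see how spread in the input data translates into spread in the estimated flood damage. The damage curve, depth, cost figure, and the ±40% uncertainty band are invented placeholders.

# Minimal Monte Carlo sketch of damage-data uncertainty (placeholder numbers).
import numpy as np

rng = np.random.default_rng(42)

def damage_usd(flood_depth_m, unit_damage_usd_per_m):
    """Toy damage curve: damage proportional to inundation depth."""
    return np.maximum(flood_depth_m, 0.0) * unit_damage_usd_per_m

depth_m = 1.8                      # assumed modelled inundation depth
reported_unit_damage = 250_000.0   # assumed reported damage per metre of depth

# Poor quality damage records -> wide uncertainty band (here +/-40%, an assumption).
samples = rng.normal(reported_unit_damage, 0.4 * reported_unit_damage, size=10_000)
estimates = damage_usd(depth_m, samples)

low, mid, high = np.percentile(estimates, [5, 50, 95])
print(f"Estimated damage: median {mid:,.0f} USD (5th-95th pct: {low:,.0f}-{high:,.0f})")

Comparing such spreads across input datasets (observations, model parameters, damage records) indicates which data should be prioritized for collection.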


2017 ◽  
Vol 49 (4) ◽  
pp. 415-424 ◽  
Author(s):  
Susan WILL-WOLF ◽  
Sarah JOVAN ◽  
Michael C. AMACHER

Lichen element content is a reliable indicator for relative air pollution load in research and monitoring programmes requiring both efficiency and representation of many sites. We tested the value of costly rigorous field and handling protocols for sample element analysis using five lichen species. No relaxation of rigour was supported; four relaxed protocols generated data significantly different from rigorous protocols for many of the 20 validated elements. Minimally restrictive site selection criteria gave quality data from 86% of 81 permanent plots in northern Midwest USA; more restrictive criteria would likely reduce indicator reliability. Use of trained non-specialist field collectors was supported when target species choice considers the lichen community context. Evernia mesomorpha, Flavoparmelia caperata and Physcia aipolia/stellaris were successful target species. Non-specialists were less successful at distinguishing Parmelia sulcata and Punctelia rudecta from lookalikes, leading to few samples and some poor quality data.


Geophysics ◽  
2008 ◽  
Vol 73 (2) ◽  
pp. E51-E57 ◽  
Author(s):  
Jack P. Dvorkin

Laboratory data supported by granular-medium and inclusion theories indicate that Poisson’s ratio in gas-saturated sand lies within a range of 0–0.25, with typical values of approximately 0.15. However, some well log measurements, especially in slow gas formations, persistently produce a Poisson’s ratio as large as 0.3. If this measurement is not caused by poor-quality data, three in situ situations — patchy saturation, subresolution thin layering, and elastic anisotropy — provide a plausible explanation. In the patchy saturation situation, the well data must be corrected to produce realistic synthetic seismic traces. In the second and third cases, the effect observed in a well is likely to persist at the seismic scale.
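
For context, the Poisson's ratio reported from well logs is derived from the compressional and shear velocities, so a data-quality problem in either log propagates directly into the estimate:

\nu = \frac{V_p^2 - 2 V_s^2}{2\left(V_p^2 - V_s^2\right)}

For example, a Vp/Vs ratio of about 1.56 corresponds to a Poisson's ratio near 0.15, while a ratio of about 1.87 gives roughly 0.30, so the anomalously high log values discussed here reflect a distinctly slower shear response than granular-medium theory predicts for gas sand.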


10.28945/2584 ◽  
2002 ◽  
Author(s):  
Herna L. Viktor ◽  
Wayne Motha

Increasingly, large organizations are engaging in data warehousing projects in order to achieve a competitive advantage through exploration of the information contained therein. It is therefore paramount to ensure that the data warehouse includes high quality data. However, practitioners agree that improving the quality of data in an organization is a daunting task. This is especially evident in data warehousing projects, which are often initiated “after the fact”. The slightest suspicion of poor quality data often keeps managers from reaching decisions, as hours are wasted in discussions over what portion of the data can be trusted. Augmenting data warehousing with data mining methods offers a mechanism to explore these vast repositories, enabling decision makers to assess the quality of their data and to unlock a wealth of new knowledge. These methods can be effectively used with the inconsistent, noisy and incomplete data that are commonplace in data warehouses.
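
As a small, generic illustration of using analysis tooling to surface data-quality problems in a warehouse extract (not the method proposed in the paper; the file name and column names are hypothetical), a quick profile with the pandas library might look like this:

# Quick data-quality profile of a warehouse extract (hypothetical file and columns).
import pandas as pd

df = pd.read_csv("customer_extract.csv")            # placeholder file name

profile = pd.DataFrame({
    "missing_pct": df.isna().mean() * 100,           # incomplete data
    "distinct_values": df.nunique(dropna=True),      # suspiciously low or high cardinality
    "dtype": df.dtypes.astype(str),
})
print(profile.sort_values("missing_pct", ascending=False))

# Duplicate records and out-of-range values are common warehouse inconsistencies.
print("duplicate rows:", int(df.duplicated().sum()))
if "order_total" in df.columns:                      # hypothetical numeric column
    print("negative order totals:", int((df["order_total"] < 0).sum()))

A profile like this gives decision makers a concrete measure of which portions of the data can be trusted, rather than leaving that judgement to open-ended discussion.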

