Automatic quality control and quality control schema in the Observation to Archive

Author(s):  
Brenner Silva ◽  
Najmeh Kaffashzadeh ◽  
Erik Nixdorf ◽  
Sebastian Immoor ◽  
Philipp Fischer ◽  
...  

The O2A (Observation to Archive) is a data-flow framework for heterogeneous sources, spanning multiple institutions and scales of Earth observation. In the O2A, once data transmission is set up, processes are executed to automatically ingest (i.e. collect and harmonize) and quality control data in near real-time. We consider a web-based sensor description application to support the transmission and harmonization of observational time-series data. We also consider a product-oriented quality control, in which a standardized and scalable approach integrates the diversity of sensors connected to the framework. A review of the literature and of observation networks in marine and terrestrial environments is underway to allow us, for example, to characterize the quality tests in use for generic and specific applications. In addition, we use a standardized quality flag scheme to support both user and technical levels of information. In our outlook, a quality score will complement the quality flag to indicate the overall plausibility of each individual data value or to measure the flagging uncertainty. In this work, we present concepts under development and give insights into the data ingest and quality control currently operating within the O2A framework.
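Below is a minimal sketch of one such automated quality test, a plausible-range check that assigns per-value flags; the numeric flag codes and the `range_test` helper are illustrative assumptions, not the O2A flag scheme itself.

```python
import numpy as np

# Illustrative flag codes (not the O2A standard):
# 1 = good, 3 = suspect, 4 = bad, 9 = missing
GOOD, SUSPECT, BAD, MISSING = 1, 3, 4, 9

def range_test(values, valid_min, valid_max, suspect_margin=0.0):
    """Flag each value against a plausible physical range."""
    values = np.asarray(values, dtype=float)
    flags = np.full(values.shape, GOOD, dtype=int)
    flags[np.isnan(values)] = MISSING
    out_of_range = (values < valid_min) | (values > valid_max)
    flags[out_of_range] = BAD
    near_edge = ((values < valid_min + suspect_margin) |
                 (values > valid_max - suspect_margin))
    flags[near_edge & ~out_of_range & ~np.isnan(values)] = SUSPECT
    return flags

# Example: a water temperature series with one gap and one spike
temps = [4.1, 4.3, np.nan, 35.0, 4.2]
print(range_test(temps, valid_min=-2.0, valid_max=30.0, suspect_margin=1.0))
# -> [1 1 9 4 1]
```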

2020 ◽  
Vol 17 (3) ◽  
pp. 1
Author(s):  
Angkana Pumpuang ◽  
Anuphao Aobpaet

Land deformation in the line of sight (LOS) direction can be measured using time series InSAR. InSAR has successfully measured land subsidence based on LOS in many big cities, including the eastern and western regions of Bangkok, which are separated by the Chao Phraya River. The two sides differ in prosperity due to human activities, land use, and land cover. This study focuses on the difference in land subsidence between the western and eastern regions of Bangkok and the most likely cause of the differing subsidence rates. Radarsat-2 single look complex (SLC) images were used to set up the time series data for long-term monitoring. To generate interferograms, StaMPS time series InSAR processing was applied using the PSI algorithm with DORIS software. Subsidence was found to be greater in the eastern region of Bangkok, with vertical displacements of +0.461 millimetres on the western side and -0.919 millimetres on the eastern side. The districts of Nong Chok, Lat Krabang, and Khlong Samwa contain the most extensive farming areas in eastern Bangkok. In addition, three major industrial estates are located in eastern Bangkok: the Lat Krabang, Anya Thani, and Bang Chan Industrial Estates. Regarding water demand, forty-eight wells were found in the eastern part and three in the western part. The number of groundwater wells shows that eastern Bangkok's demand for water exceeds that of the west, and the pumping of groundwater is a significant factor causing land subsidence in the area.

Keywords: Subsidence, InSAR, Radarsat-2, Bangkok
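For context, LOS displacements are commonly projected onto the vertical by dividing by the cosine of the radar incidence angle, assuming purely vertical motion. A minimal sketch follows; the incidence angle used is an assumed value, not one reported in the study.

```python
import math

def los_to_vertical(d_los_mm, incidence_deg):
    """Project a line-of-sight displacement onto the vertical axis,
    assuming the ground motion has no horizontal component."""
    return d_los_mm / math.cos(math.radians(incidence_deg))

# Illustrative only: the incidence angle below is an assumption.
print(los_to_vertical(-0.919, incidence_deg=34.0))  # ≈ -1.11 mm
```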


Author(s):  
Tobias Lampprecht ◽  
David Salb ◽  
Marek Mauser ◽  
Huub van de Wetering ◽  
Michael Burch ◽  
...  

Formula One races provide a wealth of data worth investigating. Although the time-varying data has a clear structure, it is quite challenging to analyze it for further properties. The focus here is on a visual classification of events, drivers, and time periods. As a first step, the Formula One data is visually encoded using a line plot metaphor reflecting the dynamic lap times; a classification of the races based on the visual outcomes gained from these line plots is then presented. The visualization tool is web-based and provides several interactively linked views on the data, starting with a calendar-based overview representation. To illustrate the usefulness of the approach, Formula One data from several years and race locations is visually explored. The chapter discusses algorithmic, visual, and perceptual limitations that might occur during the visual classification of time-series data such as Formula One races.
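As an illustration of the line-plot metaphor for lap times, the sketch below plots two hypothetical drivers' lap-time series with matplotlib; the data and driver names are invented, and the actual tool is a web-based, interactively linked visualization.

```python
import matplotlib.pyplot as plt

# Hypothetical lap times (seconds) for two drivers over ten laps
laps = range(1, 11)
driver_a = [92.1, 91.8, 91.7, 95.4, 91.6, 91.5, 91.5, 91.4, 91.6, 91.3]
driver_b = [92.5, 92.2, 92.0, 91.9, 91.9, 96.2, 92.0, 91.8, 91.7, 91.6]

fig, ax = plt.subplots(figsize=(7, 3))
ax.plot(laps, driver_a, marker="o", label="Driver A")
ax.plot(laps, driver_b, marker="s", label="Driver B")
ax.set_xlabel("Lap")
ax.set_ylabel("Lap time [s]")  # spikes (laps 4 and 6) hint at pit stops or incidents
ax.legend()
plt.tight_layout()
plt.show()
```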


Author(s):  
José Ramón Cancelo ◽  
Antoni Espasa

The authors elaborate on three basic ideas that should guide the implementation of business intelligence tools. First, they advocate closing the gap between structured information and contextual information. Second, they emphasize the need to adopt the point of view of the organization when assessing the relevance of any proposal. Third, they note that any new tool is expected to become a relevant instrument for enhancing organizational learning and generating explicit knowledge. To illustrate their point, they discuss how to set up a forecasting support system that predicts electricity consumption and converts raw time series data into market intelligence, meeting the needs of a major organization operating in the Spanish electricity markets.
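As a rough illustration of how raw consumption series can feed a forecasting baseline, the sketch below builds a synthetic hourly load and scores a weekly seasonal-naive forecast; the data and the baseline are assumptions for illustration, not the authors' forecasting support system.

```python
import numpy as np
import pandas as pd

# Hypothetical hourly load with daily and weekly cycles; a real forecasting
# support system would use actual consumption, calendar and temperature data.
rng = pd.date_range("2024-01-01", periods=24 * 28, freq="h")
hours = np.arange(len(rng))
load = pd.Series(
    100 + 20 * np.sin(2 * np.pi * hours / 24)        # daily cycle
        + 10 * np.sin(2 * np.pi * hours / (24 * 7))  # weekly cycle
        + np.random.default_rng(0).normal(0, 2, len(rng)),
    index=rng,
)

# Seasonal-naive baseline: repeat the value observed one week earlier.
forecast = load.shift(24 * 7)
mae = (load - forecast).abs().dropna().mean()
print(f"weekly seasonal-naive baseline MAE: {mae:.1f}")
```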


2019 ◽  
Vol 10 ◽  
Author(s):  
Bhusan K. Kuntal ◽  
Chetan Gadgil ◽  
Sharmila S. Mande

The affordability of high-throughput DNA sequencing has allowed us to explore the dynamics of microbial populations in various ecosystems. Mathematical modeling and simulation of such microbiome time series data can help in gaining a better understanding of bacterial communities. In this paper, we present Web-gLV, a GUI-based interactive platform for generalized Lotka-Volterra (gLV) modeling and simulation of microbial populations. The tool can be used to generate mathematical models with automatic estimation of parameters and to predict future trajectories using numerical simulations. We also demonstrate the utility of our tool on a few publicly available datasets. The case studies demonstrate the ease with which the tool can be used by biologists to model bacterial populations and simulate their dynamics to gain biological insights. We expect Web-gLV to be a valuable contribution to the fields of ecological modeling and metagenomic systems biology.
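A minimal sketch of the generalized Lotka-Volterra model underlying such tools, simulated with SciPy; the growth rates and interaction matrix are illustrative values, not parameters estimated by Web-gLV.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two interacting taxa with illustrative parameters
mu = np.array([0.8, 0.5])                   # intrinsic growth rates
M = np.array([[-0.9, -0.4],                 # M[i, j]: effect of taxon j on taxon i
              [-0.3, -0.7]])

def glv(t, x):
    # dx_i/dt = x_i * (mu_i + sum_j M_ij * x_j)
    return x * (mu + M @ x)

sol = solve_ivp(glv, t_span=(0, 30), y0=[0.1, 0.1], dense_output=True)
t = np.linspace(0, 30, 200)
abundances = sol.sol(t)                     # shape (2, 200): both trajectories
print(abundances[:, -1])                    # approximate equilibrium abundances
```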


2012 ◽  
Vol 3 (12) ◽  
pp. 382-388
Author(s):  
Abubakar Muhammed Magaji

Privatization as a reform policy package has been adopted by both developed and developing economies. Nigeria, as a developing country, has large public enterprises, which accounted for about 57 percent of fixed capital investment and about 66 percent of formal sector employment by 1997. These enterprises performed below expectation due to multiple problems. The Technical Committee on Privatization and Commercialization (TCPC) was set up to privatize the enterprises, and privatization has since commenced. This paper reviews the performance of Ashaka Cement Company as a privatized enterprise after privatization. Managers of business organizations must have reliable analytical tools for making rational decisions; ratio analysis is one such tool. Time series data from Ashaka Cement Company was used. The performance of the company has improved after privatization.
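A minimal sketch of the kind of ratio analysis mentioned above, computing a return-on-assets trend over time; all figures are hypothetical, not Ashaka Cement Company data.

```python
# Illustrative ratio-analysis sketch with hypothetical figures (in millions)
years = [1988, 1990, 1992, 1994]
net_profit = [12.0, 15.5, 21.0, 27.5]
total_assets = [150.0, 160.0, 170.0, 180.0]

for year, profit, assets in zip(years, net_profit, total_assets):
    roa = profit / assets                  # return on assets
    print(f"{year}: ROA = {roa:.1%}")
```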


Today, with the enormous generation and availability of time series and streaming data, there is an increasing need for automatic analysis architectures that deliver fast interpretations and results. One significant potential of streaming analytics is to train and model each stream with unsupervised Machine Learning (ML) algorithms to detect anomalous behaviors, fuzzy patterns, and accidents in real time. If executed reliably, such anomaly detection can be highly valuable for the application. In this paper, we propose a dynamic threshold setting system, denoted Thresh-Learner, mainly for Internet of Things (IoT) applications that require anomaly detection. The proposed model enables a wide range of real-life applications in which a dynamic threshold must be set over the streaming data to detect anomalies, prevent accidents, or send alerts to distant monitoring stations. We took up the major problem of anomalies and accidents in coal mines due to coal fires and explosions, which result in loss of life owing to the lack of automated alarm systems. We propose Thresh-Learner, a general-purpose implementation for setting dynamic thresholds, and illustrate it through a Smart Helmet for coal mine workers that seamlessly integrates monitoring, analysis, and dynamic thresholds using IoT and analysis in the cloud.
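A minimal sketch of a generic dynamic threshold over a stream, using rolling statistics; this is a simplified stand-in for the kind of thresholding described, not the Thresh-Learner algorithm itself.

```python
import numpy as np

def dynamic_threshold_flags(stream, window=30, k=3.0):
    """Flag values exceeding a threshold that adapts to recent behaviour
    (rolling mean + k standard deviations)."""
    stream = np.asarray(stream, dtype=float)
    flags = np.zeros(len(stream), dtype=bool)
    for i in range(window, len(stream)):
        history = stream[i - window:i]
        upper = history.mean() + k * history.std()
        flags[i] = stream[i] > upper
    return flags

# Example: a synthetic gas-concentration stream with one injected spike
rng = np.random.default_rng(1)
readings = rng.normal(50, 2, 200)
readings[150] = 90
print(np.nonzero(dynamic_threshold_flags(readings))[0])  # -> [150]
```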


2018 ◽  
Author(s):  
A.A Adnan ◽  
J. Diels ◽  
J.M. Jibrin ◽  
A.Y. Kamara ◽  
P. Craufurd ◽  
...  

Most crop simulation models require the use of Genotype Specific Parameters (GSPs), which provide the genotype component of G×E×M interactions. Estimation of GSPs is the most difficult aspect of most modelling exercises because it requires expensive and time-consuming field experiments. GSPs could also be estimated using multi-year and multi-locational data from breeder evaluation experiments. This research was set up with the following objectives: i) to determine GSPs of 10 newly released maize varieties for the Nigerian Savannas using data from both calibration experiments and existing breeder varietal evaluation trials; ii) to compare the accuracy of the GSPs generated using experimental and breeder data; and iii) to evaluate the ability of the CERES-Maize model to simulate grain and tissue nitrogen contents. For experimental evaluation, 8 different experiments were conducted during the rainy and dry seasons of 2016 across the Nigerian Savanna. Breeder evaluation data were also collected over 2 years at 7 locations. The calibrated GSPs were evaluated using data from a 4-year experiment conducted under varying nitrogen rates (0, 60 and 120 kg N ha−1). For the model calibration using experimental data, calculated model efficiency (EF) values ranged between 0.86 and 0.92 and the index of agreement (d-index) between 0.92 and 0.98. Calibration of time-series data produced nRMSE below 7%, while all prediction deviations were below 10% of the mean. For breeder experiments, the EF (0.52-0.81) and d-index (0.46-0.83) ranges were lower, and prediction deviations were below 17% of the means for all measured variables. Model evaluation using both experimental and breeder trials resulted in good agreement (low RMSE, high EF and d-index values) between observed and simulated grain yields and tissue and grain nitrogen contents. We conclude that higher calibration accuracy of the CERES-Maize model is achieved from detailed experiments. If these are unavailable, data from breeder evaluation trials collected across many locations and planting dates can be used with lower but acceptable accuracy.
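For reference, the sketch below computes the evaluation statistics named above using their standard definitions (Nash-Sutcliffe model efficiency, Willmott's index of agreement, and normalized RMSE); the observed and simulated yields are hypothetical.

```python
import numpy as np

def evaluation_stats(obs, sim):
    """Nash-Sutcliffe EF, Willmott's d-index and nRMSE (% of observed mean)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    obs_mean = obs.mean()
    ef = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs_mean) ** 2)
    d = 1 - np.sum((obs - sim) ** 2) / np.sum(
        (np.abs(sim - obs_mean) + np.abs(obs - obs_mean)) ** 2)
    nrmse = 100 * np.sqrt(np.mean((obs - sim) ** 2)) / obs_mean
    return ef, d, nrmse

# Hypothetical observed vs. simulated grain yields (kg ha-1)
obs = [4200, 5100, 3800, 6000, 4500]
sim = [4400, 4900, 3600, 6200, 4700]
print("EF = %.2f, d = %.2f, nRMSE = %.1f%%" % evaluation_stats(obs, sim))
```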


2016 ◽  
Vol 11 (4) ◽  
pp. 624-633
Author(s):  
Dylan Keon ◽  
Cherri M. Pancake ◽  
Ben Steinberg ◽  
Harry Yeh ◽  
...  

In spite of advances in numerical modeling and computer power, coastal buildings and infrastructures are still designed and evaluated for tsunami hazards based on parametric criteria with engineering “conservatism,” largely because complex numerical simulations require time and resources in order to obtain adequate results with sufficient resolution. This is especially challenging when conducting multiple scenarios across a variety of probabilistic occurrences of tsunamis. Numerical computations that have high temporal and spatial resolution also yield extremely large datasets, which are necessary for quantifying uncertainties associated with tsunami hazard evaluation. Here, we introduce a new web-based tool, the Data Explorer, which facilitates the exploration and extraction of numerical tsunami simulation data. The underlying concepts are not new, but the Data Explorer is unique in its ability to retrieve time series data from massive output datasets in less than a second, the fact that it runs in a standard web browser, and its user-centric approach. To demonstrate the tool’s performance and utility, two examples of hypothetical cases are presented. Its usability, together with essentially instantaneous retrieval of data, makes simulation-based analysis and subsequent quantification of uncertainties accessible, enabling a path to future design decisions based on science, rather than relying solely on expert judgment.
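A minimal sketch of the underlying idea of fast point extraction from gridded simulation output, using xarray on a synthetic (time, lat, lon) array; this is not the Data Explorer's actual storage layout or API.

```python
import numpy as np
import xarray as xr

# Synthetic stand-in for gridded tsunami model output
time = np.arange(0, 3600, 60)                # one hour at 60 s steps
lat = np.linspace(44.0, 45.0, 101)
lon = np.linspace(-125.0, -124.0, 101)
eta = xr.DataArray(
    np.random.default_rng(0).normal(0, 0.1, (len(time), len(lat), len(lon))),
    dims=("time", "lat", "lon"),
    coords={"time": time, "lat": lat, "lon": lon},
    name="wave_height",
)

# Retrieving the full time series at one location is a cheap indexed read,
# the kind of operation the Data Explorer makes interactive.
series = eta.sel(lat=44.62, lon=-124.05, method="nearest")
print(series.values[:5])
```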


HortScience ◽  
1992 ◽  
Vol 27 (10) ◽  
pp. 1129-1131 ◽  
Author(s):  
J.E. Epperson ◽  
M.C. Chien ◽  
W.O. Mizelle

An analysis was conducted using time-series data to identify possible structural change in the farm-gate demand for South Atlantic fresh peaches [Prunus persica (L.) Batsch.]. Structural change was not found in the price-quantity relationship. However, falling per capita consumption of South Atlantic fresh peaches was found to be associated with an increase in the per capita consumption of fresh fruits in general. Thus, measures such as promotion and advertising, uniform quality control, and cultivar development may increase the demand for South Atlantic fresh peaches.
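One common way to test for a structural break in a price-quantity relationship is a Chow test; the sketch below implements it on hypothetical data and is not necessarily the specification used in the study.

```python
import numpy as np
from scipy import stats

def chow_test(y, x, split):
    """Chow test for a structural break at a known split point in a simple
    linear price-quantity regression."""
    def ssr(yy, xx):
        X = np.column_stack([np.ones_like(xx), xx])
        beta = np.linalg.lstsq(X, yy, rcond=None)[0]
        return np.sum((yy - X @ beta) ** 2)
    y, x = np.asarray(y, float), np.asarray(x, float)
    k = 2                                      # parameters per regime
    ssr_pooled = ssr(y, x)
    ssr_1, ssr_2 = ssr(y[:split], x[:split]), ssr(y[split:], x[split:])
    f = ((ssr_pooled - ssr_1 - ssr_2) / k) / ((ssr_1 + ssr_2) / (len(y) - 2 * k))
    p = 1 - stats.f.cdf(f, k, len(y) - 2 * k)
    return f, p

# Hypothetical annual price (x) and quantity (y) series, for illustration only
price = np.array([10, 11, 12, 11, 13, 14, 15, 14, 16, 17], dtype=float)
qty = 50 - 2.0 * price + np.random.default_rng(2).normal(0, 1, 10)
print("F = %.2f, p = %.3f" % chow_test(qty, price, split=5))
```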

