Initial Results of Quantification of Model Validation Results Using Modal Analysis

Author(s): Urmila Agrawal, Pavel Etingov, Renke Huang

High-quality generator dynamic models are critical to reliable and accurate power system studies and planning. With the availability of PMU measurements, the measurement-based approach to model validation has gained significant prominence. Currently, model validation results are analyzed by visually comparing real-world PMU measurements with model-based simulated data. This paper proposes metrics that quantify generator dynamic model validation results based on the response of the generator to each system mode, both local and inter-area, using a modal analysis approach. The metrics characterize the inaccuracy of the model in terms of the characteristics of each mode. Initial results obtained using real-world data validate the effectiveness of the proposed metrics. In this paper, modal analysis was carried out using the Prony method.
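As a hedged illustration of the modal-analysis step the abstract mentions, the sketch below fits a ringdown signal with the classical Prony method using NumPy; the function name, model-order handling, and output conventions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def prony_modes(y, dt, order):
    """Fit y[n] ~= sum_k amp_k * z_k**n with the classical Prony method and
    return per-mode frequency (Hz), damping ratio, and complex amplitude.
    Modes appear in conjugate pairs; no model-order selection is attempted."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    # Linear prediction: y[n] = c1*y[n-1] + ... + cp*y[n-p] for n = p..N-1
    A = np.column_stack([y[order - 1 - k : N - 1 - k] for k in range(order)])
    c, *_ = np.linalg.lstsq(A, y[order:], rcond=None)
    # Discrete-time poles are the roots of z**p - c1*z**(p-1) - ... - cp
    z = np.roots(np.r_[1.0, -c])
    s = np.log(z) / dt                      # continuous-time poles
    freq = s.imag / (2 * np.pi)             # mode frequency in Hz
    zeta = -s.real / np.abs(s)              # damping ratio
    # Per-mode amplitudes from a least-squares Vandermonde fit
    V = np.power.outer(z, np.arange(N)).T   # V[n, k] = z_k**n
    amp, *_ = np.linalg.lstsq(V, y.astype(complex), rcond=None)
    return freq, zeta, amp
```

A per-mode metric in the spirit of the abstract could then compare frequency, damping ratio, and amplitude between matched modes of the PMU-measured and model-simulated responses.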


2020
Author(s): Urmila Agrawal, Pavel Etingov, Renke Huang

High-quality generator dynamic models are critical to reliable and accurate power system studies and planning. With the availability of PMU measurements, the measurement-based approach to model validation has gained significant prominence. Currently, model validation results are analyzed by visually comparing real-world PMU measurements with model-based response measurements, and parameter adjustments rely mostly on engineering experience. This paper proposes advanced performance metrics that systematically quantify generator dynamic model validation results by separately taking into consideration the slow governor response and the comparatively fast oscillatory response. The performance metric for the governor response is based on the step-response characteristics of a system, and the metric for the oscillatory response is based on the response of the generator to each system mode calculated using modal analysis. The proposed metrics are aimed at providing critical information to help select the parameters to be tuned for model calibration through enhanced sensitivity analysis, and also to support rule-based model calibration. Results obtained using both simulated and real-world measurements validate the effectiveness of the proposed performance metrics and sensitivity analysis for model validation and calibration.
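The governor-response metric is said to be based on step-response characteristics; a minimal sketch of computing such characteristics from a sampled response follows. The 10-90% rise-time and ±5% settling-band definitions are common conventions assumed here, not necessarily the paper's.

```python
import numpy as np

def step_metrics(t, y):
    """Illustrative step-response characteristics of a sampled signal y(t)
    moving from its initial value y[0] toward a final value y[-1].
    Assumes a nonzero net change (span != 0)."""
    y0, yf = y[0], y[-1]
    span = yf - y0
    # Rise time: first crossings of 10% and 90% of the net change
    t10 = t[np.argmax((y - y0) / span >= 0.1)]
    t90 = t[np.argmax((y - y0) / span >= 0.9)]
    # Overshoot beyond the final value, as a percentage of the net change
    overshoot = (np.max(y) - yf) / abs(span) * 100.0
    # Settling time: last sample outside a +/-5% band around yf
    outside = np.abs(y - yf) > 0.05 * abs(span)
    t_settle = t[np.where(outside)[0][-1]] if outside.any() else t[0]
    return {"rise_time": t90 - t10,
            "overshoot_pct": overshoot,
            "settling_time": t_settle}
```

A validation metric in this spirit could then be, for instance, the relative difference of each characteristic between the PMU-measured and model-simulated governor responses.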


Author(s): Marcelo N. de Sousa, Ricardo Sant’Ana, Rigel P. Fernandes, Julio Cesar Duarte, José A. Apolinário, ...

In outdoor RF localization systems, particularly where line of sight cannot be guaranteed or where multipath effects are severe, information about the terrain may improve the performance of the position estimate. Given the difficulties in obtaining real data, a ray-tracing fingerprint is a viable option. Nevertheless, although they present good simulation results, systems trained with simulated features only suffer degradation when employed to process real-life data. This work aims to improve localization accuracy when using ray-tracing fingerprints and a small amount of field data obtained from an adverse environment where a large number of measurements is not an option. We employ machine learning (ML) algorithms to exploit the multipath information. We selected the random forest and gradient boosting algorithms, both considered efficient tools in the literature. In a strict simulation scenario (simulated data for training, validating, and testing), we obtained the same good results found in the literature (error around 2 m). In a real-world system (simulated data for training, real data for validating and testing), both ML algorithms resulted in a mean positioning error around 100 m. We have also obtained experimental results for noisy (artificially added Gaussian noise) and mismatched (with a null subset of features) conditions. From the simulations carried out in this work, our study revealed that enhancing the ML model with a few real-world data points improves the overall localization performance. Of the ML algorithms employed herein, we also observed that, under noisy conditions, the random forest algorithm achieved a slightly better result than the gradient boosting algorithm; they achieved similar results in the mismatch experiment. The practical implication of this work is that multipath information, once rejected by older localization techniques, now represents a significant source of information whenever we have prior knowledge with which to train the ML algorithm.
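As a rough sketch of the training strategy described (a simulated ray-tracing fingerprint set augmented with a few field samples), using scikit-learn's random forest; the feature dimension and the random placeholder data are invented for illustration, since real fingerprints would come from a ray tracer and field measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Placeholder data: rows are multipath fingerprint vectors (e.g., per-ray
# delays, angles, received powers); targets are 2-D positions in meters.
X_sim, y_sim = rng.normal(size=(5000, 12)), rng.uniform(0, 500, size=(5000, 2))
X_real, y_real = rng.normal(size=(60, 12)), rng.uniform(0, 500, size=(60, 2))

# Enhance the simulated training set with a few of the available field samples.
X_train = np.vstack([X_sim, X_real[:40]])
y_train = np.vstack([y_sim, y_real[:40]])

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# Validate/test on held-out real measurements only.
pred = model.predict(X_real[40:])
err = np.linalg.norm(pred - y_real[40:], axis=1)
print(f"mean positioning error: {err.mean():.1f} m")
```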


2020
Author(s): Yutian T. Thompson, Hairong Song, Dexin Shi, Zhengkui Liu

Conventional approaches for selecting a reference indicator (RI) can lead to misleading results when testing for measurement invariance (MI). Several newer quantitative methods are available for more rigorous RI selection. However, it is still unknown how well these methods perform in terms of correctly identifying a truly invariant item as the RI. Study 1 was therefore designed to address this issue under various conditions using simulated data. As a follow-up, Study 2 investigated the advantages and disadvantages of RI-based approaches for MI testing in comparison with non-RI-based approaches. Altogether, the two studies provide a solid examination of how the RI matters in MI tests. In addition, a large sample of real-world data was used to empirically compare the RI selection methods as well as the RI-based and non-RI-based approaches to MI testing. We close with a discussion of all these methods, followed by suggestions and recommendations for applied researchers.
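To make concrete why the RI choice matters, here is a hedged numerical sketch (not the authors' procedure): in a one-factor model, fixing the RI loading to 1 rescales every other loading by the group's true RI loading, so choosing a non-invariant RI makes truly invariant items appear non-invariant.

```python
import numpy as np

rng = np.random.default_rng(1)
true_loadings = {                               # one-factor model, 4 items
    "group1": np.array([0.8, 0.7, 0.6, 0.5]),
    "group2": np.array([0.4, 0.7, 0.6, 0.5]),   # only item 1 is non-invariant
}

for ref in (0, 1):  # candidate reference indicators: item 1 vs item 2
    print(f"reference indicator: item {ref + 1}")
    for g, lam in true_loadings.items():
        f = rng.normal(size=20000)
        X = np.outer(f, lam) + rng.normal(scale=0.5, size=(20000, 4))
        # With var(f) = 1: cov(x_j, x_m) / cov(x_ref, x_m) ~= lam_j / lam_ref
        C = np.cov(X, rowvar=False)
        m = 3                       # auxiliary indicator, distinct from both RIs
        ratios = C[:, m] / C[ref, m]
        ratios[m] = np.nan          # entry m mixes in error variance; skip it
        print(f"  {g}: loadings rescaled to the RI = {np.round(ratios, 2)}")
```

Running this shows that with item 2 (truly invariant) as the RI, only item 1 differs across groups, whereas with item 1 as the RI, items 2 and 3 spuriously appear non-invariant.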


2020
Author(s): Alexander P. Christensen, Luis Eduardo Garrido, Hudson Golino

One common approach for constructing tests that measure a single attribute is the semantic similarity approach, in which items vary only slightly in their wording and content. Despite being an effective strategy for ensuring high internal consistency, the information in such tests may become redundant or, worse, confound the interpretation of the test scores. With the advent of network models, where tests represent a complex system and components (usually items) represent causally autonomous features, redundant variables may have inadvertent effects on the interpretation of their metrics. These issues motivated the development of a novel approach called Unique Variable Analysis (UVA), which detects redundant variables in multivariate data. The goal of UVA is to statistically identify potential redundancies in multivariate data so that researchers can decide how best to handle them. Using a Monte Carlo simulation approach, we generated multivariate data with redundancies based on examples of known real-world redundancies. We then demonstrate the effects that redundancy can have on the accurate estimation of dimensions. Next, we evaluate UVA's ability to detect redundant variables in the simulated data. Based on these results, we provide a tutorial on how to apply UVA to real-world data. Our example data demonstrate that redundant variables produce inaccurate estimates of dimensional structure, but that after applying UVA the expected structure can be recovered. In sum, our study suggests that redundancy can have substantial effects on validity if left unchecked and that redundancy assessment should be integrated into standard validation practices.
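UVA itself is described within a network-psychometrics framework; as a loose approximation of the idea, the sketch below flags candidate redundant item pairs via weighted topological overlap on an absolute-correlation network. The cutoff value and the use of raw correlations (rather than a regularized network estimate) are assumptions for illustration, not UVA's exact procedure.

```python
import numpy as np

def weighted_topological_overlap(X):
    """Pairwise weighted topological overlap on an absolute-correlation
    network built from the columns of X (observations x items)."""
    A = np.abs(np.corrcoef(X, rowvar=False))
    np.fill_diagonal(A, 0.0)
    k = A.sum(axis=0)                        # node strengths
    num = A @ A + A                          # shared neighborhood + direct edge
    den = np.minimum.outer(k, k) + 1.0 - A
    W = num / den
    np.fill_diagonal(W, 0.0)
    return W

def flag_redundant_pairs(X, cutoff=0.25):
    """Return (i, j, overlap) for item pairs above an assumed cutoff,
    as candidates for review, merging, or removal."""
    W = weighted_topological_overlap(X)
    i, j = np.triu_indices_from(W, k=1)
    mask = W[i, j] > cutoff
    return list(zip(i[mask], j[mask], np.round(W[i, j][mask], 2)))
```

Items flagged together would then be inspected and possibly combined before estimating the dimensional structure, which is the workflow the abstract motivates.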


2021
Author(s): Robert A. Player, Angeline M. Aguinaldo, Brian B. Merritt, Lisa N. Maszkiewicz, Oluwaferanmi E. Adeyemo, ...

A major challenge in the field of metagenomics is selecting the correct combination of sequencing platform and downstream metagenomic analysis algorithm, or classifier. Here, we present the Metagenomic Evaluation Tool Analyzer (META), which produces simulated data and facilitates platform and algorithm selection for any given metagenomic use case. META-generated in silico read data are modular, scalable, and reflect user-defined community profiles, while the downstream analysis is performed using a variety of metagenomic classifiers. Reported results include information on resource utilization, time-to-answer, and performance. Real-world data can also be analyzed using selected classifiers and the results benchmarked against simulations. To test the utility of the META software, simulated data were compared to real-world viral and bacterial metagenomic samples run on four different sequencers and analyzed using 12 metagenomic classifiers. Lastly, we introduce the META Score: a unified, quantitative value that rates a classifier's ability to both identify and count taxa in a representative sample.
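A hedged sketch of the kind of per-classifier scoring the abstract alludes to, comparing identification and abundance accuracy against a known simulated community; the metric names and choices here are invented for illustration and are not META's actual formula.

```python
from typing import Dict

def score_classifier(truth: Dict[str, float], pred: Dict[str, float]) -> Dict[str, float]:
    """Compare one classifier's taxa/abundance calls against the known
    in silico community profile (taxon -> relative abundance)."""
    tp = set(truth) & set(pred)
    precision = len(tp) / len(pred) if pred else 0.0
    recall = len(tp) / len(truth) if truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    # L1 distance between relative-abundance vectors over the union of taxa
    taxa = set(truth) | set(pred)
    l1 = sum(abs(truth.get(t, 0.0) - pred.get(t, 0.0)) for t in taxa)
    return {"precision": precision, "recall": recall, "f1": f1,
            "abundance_l1": l1}

# Example: a simulated 3-taxon community vs one classifier's output
truth = {"E. coli": 0.5, "S. aureus": 0.3, "P. aeruginosa": 0.2}
pred = {"E. coli": 0.55, "S. aureus": 0.25, "B. subtilis": 0.2}
print(score_classifier(truth, pred))
```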

