Scenario generation by selection from historical data

Author(s):  
Michal Kaut

Abstract
In this paper, we present and compare several methods for generating scenarios for stochastic-programming models by direct selection from historical data. The methods range from standard sampling and k-means, through iterative sampling-based selection methods, to a new moment-based optimization approach. We compare the methods on a simple portfolio-optimization model and show how to use them when selecting whole sequences from the data instead of single data points.
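
To make the selection idea concrete, here is a minimal sketch of the k-means variant named above: cluster the historical vectors and use, as each scenario, the historical observation nearest its cluster centre, with a probability proportional to cluster size. The data, asset count, and scenario count are illustrative assumptions, not taken from the paper.

```python
# Scenario selection from historical data via k-means (illustrative sketch).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
history = rng.normal(0.0, 0.01, size=(1000, 5))   # synthetic: 1000 days, 5 assets

n_scenarios = 20
km = KMeans(n_clusters=n_scenarios, n_init=10, random_state=0).fit(history)

scenarios, probabilities = [], []
for k in range(n_scenarios):
    members = history[km.labels_ == k]
    # the scenario is the actual historical point closest to the centroid,
    # so scenarios remain real data points rather than synthetic averages
    idx = np.argmin(np.linalg.norm(members - km.cluster_centers_[k], axis=1))
    scenarios.append(members[idx])
    probabilities.append(len(members) / len(history))

scenarios = np.vstack(scenarios)
print(scenarios.shape, sum(probabilities))        # (20, 5) 1.0
```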

Mathematics ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. 355 ◽  
Author(s):  
Jens Jauch ◽  
Felix Bleimund ◽  
Michael Frey ◽  
Frank Gauterin

The B-spline function representation is commonly used for data approximation and trajectory definition, but filter-based methods for nonlinear weighted least-squares (NWLS) approximation are restricted to a bounded definition range. We present an algorithm, termed nonlinear recursive B-spline approximation (NRBA), for an iterative NWLS approximation of an unbounded set of data points by a B-spline function. NRBA is based on a marginalized particle filter (MPF), in which a Kalman filter (KF) solves the linear subproblem optimally while a particle filter (PF) deals with nonlinear approximation goals. NRBA can adjust the bounded definition range of the approximating B-spline function at run-time such that, regardless of the initially chosen definition range, all data points can be processed. In numerical experiments, NRBA achieves approximation results close to those of the Levenberg–Marquardt algorithm. An NWLS approximation problem is a nonlinear optimization problem. The direct trajectory optimization approach also leads to a nonlinear problem, and the computational effort of most solution methods grows exponentially with the trajectory length. We demonstrate how NRBA can be applied to multiobjective trajectory optimization for a battery electric vehicle (BEV) in order to determine an energy-efficient velocity trajectory. With NRBA, the effort increases only linearly with the number of processed data points and the trajectory length.
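
For context, here is a minimal sketch of the underlying batch problem: a weighted least-squares B-spline fit over a fixed, bounded definition range, which is exactly the restriction NRBA lifts. This uses SciPy's fixed-knot least-squares spline; the MPF/KF/PF machinery of NRBA itself is not reproduced, and the data and knots are illustrative.

```python
# Batch weighted least-squares B-spline approximation (illustrative sketch).
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + rng.normal(0.0, 0.1, x.size)      # noisy samples
w = np.full(x.size, 1.0)                          # per-point weights

t = np.linspace(1.0, 9.0, 7)                      # interior knots: bounded range
spline = LSQUnivariateSpline(x, y, t, w=w, k=3)   # cubic weighted LS fit

print(float(spline(5.0)))                         # evaluate the fitted B-spline
```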


Author(s):  
Eliot Rudnick-Cohen ◽  
Jeffrey W. Herrmann ◽  
Shapour Azarm

Feasibility robust optimization techniques solve optimization problems with uncertain parameters that appear only in their constraint functions. Solving such problems requires finding an optimal solution that is feasible for all realizations of the uncertain parameters. This paper presents a new feasibility robust optimization approach involving uncertain parameters defined on continuous domains without any known probability distributions. The proposed approach integrates a new sampling-based scenario generation scheme with a new scenario reduction approach in order to solve feasibility robust optimization problems. An analysis of the proposed approach was performed to provide worst-case bounds on its computational cost. The proposed approach was applied to three test problems and compared against other scenario-based robust optimization approaches. A test was conducted on one of the test problems to demonstrate that the computational cost of the proposed approach does not increase significantly as additional uncertain parameters are introduced. The results show that the proposed approach converges to a robust solution faster than conventional robust optimization approaches that discretize the uncertain parameters.
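
A minimal sketch of the generic scenario-based loop such approaches build on: solve with the current scenario set, sample the uncertain parameters, add the worst violated sample as a new scenario, and repeat until no sampled violation remains. The toy problem and the keep-only-the-worst-violator reduction are illustrative stand-ins for the paper's specific schemes.

```python
# Iterative scenario generation for feasibility robustness (illustrative sketch).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def g(x, u):                                 # require g(x, u) >= 0 for all u
    return u[0] * x[0] + u[1] * x[1] - 1.0

scenarios = [np.array([1.0, 1.0])]           # initial scenario set
x = np.array([1.0, 1.0])
for _ in range(20):
    cons = [{"type": "ineq", "fun": (lambda xx, uu=u: g(xx, uu))} for u in scenarios]
    x = minimize(lambda xx: xx[0] + xx[1], x, constraints=cons).x
    samples = rng.uniform(0.5, 1.5, size=(200, 2))   # sample uncertain parameters
    viol = np.array([g(x, u) for u in samples])
    if viol.min() >= -1e-6:                  # empirically robust: stop
        break
    scenarios.append(samples[np.argmin(viol)])       # keep only the worst violator

print(x, len(scenarios))
```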


2020 ◽  
Vol 163 (3) ◽  
pp. 1267-1285 ◽  
Author(s):  
Jens Kiesel ◽  
Philipp Stanzel ◽  
Harald Kling ◽  
Nicola Fohrer ◽  
Sonja C. Jähnig ◽  
...  

Abstract
The assessment of climate change and its impact relies on the ensemble of models available and/or sub-selected. However, assessing the validity of simulated climate change impacts is not straightforward, because historical data is commonly used for bias adjustment, to select ensemble members, or to define a baseline against which impacts are compared—and, naturally, there are no observations to evaluate future projections. We hypothesize that historical streamflow observations contain valuable information for investigating practices for the selection of model ensembles. The Danube River at Vienna is used as a case study, with EURO-CORDEX climate simulations driving the COSERO hydrological model. For each selection method, we compare the observed to the simulated streamflow shift from the reference period (1960–1989) to the evaluation period (1990–2014). Comparison against no selection shows that an informed selection of ensemble members improves the quantification of climate change impacts. However, the selection method matters: model selection based on hindcasted climate or streamflow alone is misleading, while methods that maintain the diversity and information content of the full ensemble are favorable. Prior to carrying out climate impact assessments, we propose splitting the long-term historical data and using it to test climate model performance, sub-selection methods, and their agreement in reproducing the indicator of interest, which further provides the expectable benchmark of near- and far-future impact assessments. This test is well suited to be applied in multi-basin experiments to obtain a better understanding of uncertainty propagation and more universal recommendations regarding uncertainty reduction in hydrological impact studies.
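
As a sketch of the evaluation idea, the snippet below computes the streamflow shift from the reference to the evaluation period for observations and for each ensemble member, then ranks members by how well they reproduce the observed shift. The arrays are synthetic stand-ins, not EURO-CORDEX/COSERO output.

```python
# Ranking ensemble members by reproduced streamflow shift (illustrative sketch).
import numpy as np

rng = np.random.default_rng(3)
obs_ref = rng.normal(1900, 50, 30)                # annual means, 1960-1989 (m^3/s)
obs_eval = rng.normal(1850, 50, 25)               # annual means, 1990-2014 (m^3/s)
sims_ref = rng.normal(1900, 50, size=(16, 30))    # 16 synthetic ensemble members
sims_eval = rng.normal(1880, 50, size=(16, 25))

obs_shift = obs_eval.mean() - obs_ref.mean()
sim_shift = sims_eval.mean(axis=1) - sims_ref.mean(axis=1)

error = np.abs(sim_shift - obs_shift)             # per-member shift error
ranked = np.argsort(error)                        # best-performing members first
print(obs_shift, ranked[:5])
```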


2011 ◽  
Vol 54 (2) ◽  
pp. 75-84 ◽  
Author(s):  
Sandeep Kalelkar ◽  
Jay Postlewaite

Cleanroom wipers have long played an indispensable role in managing contamination in controlled environments. From wiping residues off hard surfaces to applying cleaning solutions, wipers perform a variety of tasks that help maintain the cleanliness levels desired in a given cleanroom environment. This makes the selection of cleanroom wipers a critical decision in any controlled environment. One common way to distinguish between cleanroom wipers of similar structural design is to compare test results across a variety of criteria, according to practices recommended by organizations such as the IEST. However, these results are typically listed as single data points for a given test and are meant to indicate either "typical values" or, in some instances, target specifications. This approach is inherently limited and ineffective in assessing the true cleanliness levels of a given wiper product. In this study, we review the test methods used to evaluate cleanroom wipers and present a new and improved approach by which users can evaluate their cleanliness. We provide a framework by which the consistency of the cleanliness of cleanroom wipers can be assessed in a statistically relevant manner. Finally, we demonstrate the value of using the consistency of test results, rather than a singular test result, as the true measure of wiper quality.
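
A minimal sketch of the consistency idea: summarize replicate test results per wiper product by mean and coefficient of variation, so two products with the same "typical value" can still be separated by how consistent they are. The numbers are invented for illustration, not IEST test data.

```python
# Consistency of replicate wiper test results (illustrative sketch).
import numpy as np

# replicate particle-count results across lots for two hypothetical products
wiper_a = np.array([12.1, 11.8, 12.5, 12.0, 11.9])
wiper_b = np.array([9.0, 15.2, 7.8, 16.5, 11.5])   # same mean, far less consistent

for name, data in [("A", wiper_a), ("B", wiper_b)]:
    cv = data.std(ddof=1) / data.mean() * 100.0    # coefficient of variation
    print(f"wiper {name}: mean={data.mean():.1f}, CV={cv:.1f}%")
```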


2019 ◽  
Vol 8 (3) ◽  
pp. 5630-5634

In artificial-intelligence applications such as biomedicine and bioinformatics, data clustering is an important and complex task that arises in many different situations. Prototype-based clustering is a reasonably simple way to describe and evaluate data that can be treated as a non-vertical representation of relational data. Because of the barycentric space present in prototype-based clustering, maintaining and updating the cluster structure as data points change is still a challenging task for biomedical relational data. In this paper, we therefore propose a Novel Optimized Evidential C-Medoids (NOEC) algorithm, which belongs to the family of prototype-based clustering approaches, for updating and measuring the proximity of medical relational data. We use an ant colony optimization approach to provide similarity services over different features for updating clusters of relational medical data. We evaluate our approach on several biomedical synthetic data sets. Experimental results show that the proposed approach gives better and more efficient results, in terms of accuracy and processing time, than comparable approaches for medical relational data sets.
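
For orientation, here is a minimal sketch of the plain medoid-based clustering family that NOEC extends, run directly on a relational (pairwise-distance) representation; the evidential and ant-colony-optimization components of NOEC are not shown, and the data are synthetic.

```python
# Plain k-medoids on a relational (distance-matrix) representation (sketch).
import numpy as np

rng = np.random.default_rng(4)
points = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(5, 1, (30, 2))])
D = np.linalg.norm(points[:, None] - points[None, :], axis=2)  # relational data

k = 2
medoids = np.array([0, len(points) - 1])           # simple deterministic start
for _ in range(20):
    labels = np.argmin(D[:, medoids], axis=1)      # assign to nearest medoid
    new_medoids = np.array([
        # within each cluster, pick the member minimizing total distance
        np.where(labels == c)[0][
            np.argmin(D[np.ix_(labels == c, labels == c)].sum(axis=1))
        ]
        for c in range(k)
    ])
    if np.array_equal(np.sort(new_medoids), np.sort(medoids)):
        break                                      # medoids stable: converged
    medoids = new_medoids

print(medoids, np.bincount(labels))
```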


2018 ◽  
Author(s):  
Julien Riou ◽  
Chiara Poletto ◽  
Pierre-Yves Boëlle

Abstract
Model-based epidemiological assessment is useful for supporting decision-making at the beginning of an emerging Aedes-transmitted outbreak. However, early forecasts are generally unreliable, as little information is available in the first few incidence data points. Here, we show how past Aedes-transmitted epidemics can help improve these predictions. The approach was applied to the 2015–2017 Zika virus epidemics in three islands of the French West Indies, with historical data including other Aedes-transmitted diseases (chikungunya and Zika) in the same and other locations. Hierarchical models were used to build informative a priori distributions on the reproduction ratio and the reporting rates. The accuracy and sharpness of forecasts improved substantially when these a priori distributions were used in models for prediction. For example, early forecasts of final epidemic size obtained without historical information were 3.3 times too high on average (range: 0.2 to 5.8) with respect to the eventual size, but were far closer (1.1 times the real value on average, range: 0.4 to 1.5) using information on past chikungunya virus (CHIKV) epidemics in the same places. Likewise, the 97.5% upper bound for maximal incidence was 15.3 times (range: 2.0 to 63.1) the actual peak incidence, but became much sharper, at 2.4 times (range: 1.3 to 3.9) the actual peak incidence, with informative a priori distributions. Improvements were more limited for the date of peak incidence and the total duration of the epidemic. The framework can adapt to all forecasting models at the early stages of emerging Aedes-transmitted outbreaks.
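
A minimal sketch of the core idea: turn reproduction-ratio estimates from past epidemics into an informative prior and combine it with a likelihood built from the first incidence points. The toy Poisson branching likelihood and all numbers are illustrative; the paper's hierarchical models are richer.

```python
# Informative prior on the reproduction ratio from past epidemics (sketch).
import numpy as np
from scipy import stats

past_R = np.array([1.5, 1.9, 2.2, 1.7, 2.0])       # hypothetical past R estimates
mu, sigma = np.log(past_R).mean(), np.log(past_R).std(ddof=1)
prior = stats.lognorm(s=sigma, scale=np.exp(mu))   # informative a priori distribution

grid = np.linspace(0.5, 4.0, 500)
early_cases = np.array([3, 4, 6, 8])               # first few incidence data points
# toy likelihood: each generation's cases ~ Poisson(R * previous generation)
loglik = sum(stats.poisson.logpmf(early_cases[t + 1], grid * early_cases[t])
             for t in range(len(early_cases) - 1))
posterior = prior.pdf(grid) * np.exp(loglik - loglik.max())
posterior /= np.trapz(posterior, grid)

print(grid[np.argmax(posterior)])                  # posterior mode for R
```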


2021 ◽  
Author(s):  
Amila Indika ◽  
Nethmal Warusamana ◽  
Erantha Welikala ◽  
Sampath Deegalla

Data for this research was gathered from publicly available online sources for the NASDAQ American stock exchange. We gathered data for the 20 most active companies, covering 10 years of historical data counting back from 21 September 2019, for a total of 40,044 data points.
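
The text does not say how the data were pulled; as one hedged possibility, here is a sketch using the third-party yfinance package with placeholder tickers (not the authors' actual company list).

```python
# Pulling historical NASDAQ price data (illustrative sketch, assumes yfinance).
import yfinance as yf

tickers = ["AAPL", "MSFT", "AMZN"]                 # stand-ins for the 20 most active
data = yf.download(tickers, start="2009-09-21", end="2019-09-21")
print(data.shape)                                  # rows: trading days in the window
```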


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Andrew Shaffer ◽  
Elizabeth Rahn ◽  
Kenneth Saag ◽  
Amy Mudano ◽  
Angelo Gaffo

Abstract
Background: Previous studies have noted significant variation in serum urate (sUA) levels, and it is unknown how this influences the accuracy of hyperuricemia classification based on single data points. Despite this known variability, hyperuricemic patients are often used as a control group in gout studies. Our objective was to determine the accuracy of hyperuricemia classifications based on single data points versus multiple data points, given the degree of variability observed with serial measurements of sUA.
Methods: Data were analyzed from a cross-over clinical trial of urate-lowering therapy in young adults without a gout diagnosis. In the control phase, the sUA levels used for this analysis were collected at 2–4 week intervals. The mean coefficient of variation for sUA was determined, as were rates of conversion between normouricemia (sUA ≤ 6.8 mg/dL) and hyperuricemia (sUA > 6.8 mg/dL).
Results: Mean study participant (n = 85) age was 27.8 ± 7.0 years; 39% of participants were female and 41% were African-American. The mean sUA coefficient of variation was 8.5% ± 4.9% (range: 1% to 23%). There was no significant difference in variation between men and women, or between participants who were initially normouricemic and those who were initially hyperuricemic. Among those initially normouricemic (n = 72), 21% converted to hyperuricemia in at least one subsequent measurement. The subgroup with initial sUA < 6.0 mg/dL (n = 54) was much less likely to have future values in the hyperuricemic range than the group with screening sUA values between 6.0 and 6.8 mg/dL (n = 18) (7% vs. 39%, p = 0.0037). Of the participants initially hyperuricemic (n = 13), 46% were later normouricemic in at least one measurement.
Conclusion: Single sUA measurements were unreliable for hyperuricemia classification due to spontaneous variation. If a single measurement must be used for classification, participants with sUA < 6.0 mg/dL were less likely to show future hyperuricemic measurements, so this could be considered a safer threshold for ruling out intermittent hyperuricemia based on a single measurement point.
Trial registration: Data from parent study ClinicalTrials.gov Identifier: NCT02038179.
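
A minimal sketch of the two statistics the study reports: the within-person coefficient of variation of serial sUA measurements, and how often a person classified as normouricemic at a single draw later crosses the 6.8 mg/dL threshold. Values are simulated, not trial data.

```python
# Within-person sUA variability and single-draw misclassification (sketch).
import numpy as np

rng = np.random.default_rng(5)
n_people, n_visits = 85, 8
base = rng.uniform(4.0, 7.5, n_people)             # each person's underlying sUA
sua = base[:, None] + rng.normal(0.0, base[:, None] * 0.085, (n_people, n_visits))

cv = sua.std(axis=1, ddof=1) / sua.mean(axis=1) * 100   # per-person CV (%)
first_normo = sua[:, 0] <= 6.8                     # classified by a single draw
converted = first_normo & (sua[:, 1:] > 6.8).any(axis=1)

print(f"mean CV: {cv.mean():.1f}%")
print(f"initially normouricemic, later > 6.8: {converted.sum()}/{first_normo.sum()}")
```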

