A Sampling Approach for Proactive Project Scheduling under Generalized Time-dependent Workability Uncertainty

2019 ◽ Vol 64 ◽ pp. 385-427
Author(s): Wen Song, Donghun Kang, Jie Zhang, Zhiguang Cao, Hui Xi

In real-world project scheduling applications, activity durations are often uncertain. Proactive scheduling can effectively cope with duration uncertainty by generating robust baseline solutions from a priori stochastic knowledge. However, most existing proactive approaches assume that the duration uncertainty of an activity is unrelated to its scheduled start time, which may not hold in many real-world scenarios. In this paper, we relax this assumption by allowing the duration uncertainty to be time-dependent, arising from uncertainty about whether the activity can be executed in each time slot. We propose a stochastic optimization model to find an optimal Partial-order Schedule (POS) that minimizes the expected makespan. This model covers both the time-dependent uncertainty studied in this paper and the traditional time-independent duration uncertainty. To circumvent the complexity of evaluating a given solution, we approximate the stochastic optimization model using Sample Average Approximation (SAA). Finally, we design two efficient branch-and-bound algorithms to solve the NP-hard SAA problem. Empirical evaluation confirms that our approach generates high-quality proactive solutions for a variety of uncertainty distributions.
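To make the sampling step concrete, here is a minimal Python sketch of the SAA idea under time-dependent workability; the serial execution of activities, the `workable_at(t)` model, and all names are illustrative assumptions, not the paper's formulation.

```python
import random

def serial_makespan(base_durations, workable_at, rng):
    """One sampled scenario: time slot t is workable with probability
    workable_at(t); an activity needing d workable slots may therefore
    occupy more than d calendar slots. Activities run in series here."""
    t = 0
    for d in base_durations:
        done = 0
        while done < d:
            if rng.random() < workable_at(t):
                done += 1
            t += 1
    return t

def saa_expected_makespan(base_durations, workable_at, n_samples=1000, seed=0):
    """Sample Average Approximation: replace E[makespan] by the mean
    over n_samples independently drawn scenarios."""
    rng = random.Random(seed)
    total = sum(serial_makespan(base_durations, workable_at, rng)
                for _ in range(n_samples))
    return total / n_samples

# Toy model where workability decays over time (e.g., weather risk).
print(saa_expected_makespan([3, 5, 2], lambda t: max(0.5, 0.9 - 0.01 * t)))
```

Because the makespan is time-dependent, the same activity sequence started later can take longer, which is exactly why the evaluation has to be done by sampling rather than in closed form.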

Author(s): Mohammad Karimi, Maryam Miriestahbanati, Hamed Esmaeeli, Ciprian Alecsandru

The calibration of microscopic simulation models can be automated using optimization algorithms. Because of the random nature of this problem, the corresponding objectives are not simple concave functions, so such problems cannot easily be solved unless a stochastic optimization algorithm is used. In this study, two objectives are proposed so that the simulation model reproduces real-world traffic more accurately with respect to both longitudinal and lateral movements. When several objectives are defined for an optimization problem, one solution method aggregates the objectives into a single-objective function by assigning a weighting coefficient to each objective before running the algorithm (also known as an a priori method). However, this method does not capture the information exchange among solutions during the calibration process and may fail to minimize all objectives at the same time. To address this limitation, an a posteriori method, multi-objective particle swarm optimization (MOPSO), is employed to calibrate a microscopic simulation model in a single step while minimizing the objective functions simultaneously. A set of traffic data collected by video surveillance is used to simulate a real-world highway in VISSIM. The performance of the a posteriori-based MOPSO in the calibration process is compared with a priori-based optimization methods such as particle swarm optimization, the genetic algorithm, and the whale optimization algorithm. The optimization methodologies are implemented in MATLAB and connected to VISSIM through its COM interface. Based on the validation results, the a posteriori-based MOPSO yields the most accurate solutions among the tested algorithms with respect to both objectives.
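The contrast between the two aggregation strategies can be sketched in a few lines of Python: an a priori method scalarizes the objectives with fixed weights, while an a posteriori method such as MOPSO maintains an archive of non-dominated solutions. The dominance test below is the standard one; the numeric candidates are made-up placeholders for the longitudinal and lateral calibration errors.

```python
def weighted_sum(objs, weights):
    """A priori aggregation: the weights must be fixed before the search,
    so a poor choice can leave one objective under-minimized."""
    return sum(w * o for w, o in zip(weights, objs))

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep the non-dominated solutions; an a posteriori method keeps
    such an archive instead of collapsing objectives into one number."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

candidates = [(0.12, 0.30), (0.15, 0.22), (0.20, 0.40), (0.11, 0.35)]
print(pareto_front(candidates))  # (0.20, 0.40) is dominated and dropped
```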


2014 ◽ Vol 2014 ◽ pp. 1-15
Author(s): Yu Zhang, Jiafu Tang, Shimeng Lv, Xinggang Luo

We consider an ad hoc Floyd-A* algorithm to determine the a priori least-time itinerary from an origin to a destination, given an initial time, in an urban scheduled public transport (USPT) network. The network is bimodal (i.e., USPT lines and walking) and time-dependent. The modified USPT network model yields more reasonable itineraries. An itinerary is connected through a sequence of time-label arcs. The proposed Floyd-A* algorithm is composed of two procedures, designated the Itinerary Finder and the Cost Estimator. The A*-based Itinerary Finder determines the time-dependent least-time itinerary in real time, aided by heuristic information precomputed by the Floyd-based Cost Estimator, which pre-estimates each time-dependent arc travel time as an associated static lower bound. The Floyd-A* algorithm is proven to guarantee optimality and is demonstrated, through a real-world example on the Shenyang City USPT network, to be more efficient than previous procedures. The computational experiments also reveal the time-dependent nature of the least-time itinerary. On the premise that lines run punctually, “just boarding” and “just missing” cases are identified.
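A compact Python sketch of the two procedures may help: Floyd-Warshall over static lower-bound arc times plays the role of the Cost Estimator, and its output serves as an admissible heuristic for the A*-based Itinerary Finder on the time-dependent network. The graph encoding and the `travel_time(u, v, t)` callback are illustrative assumptions, not the paper's data structures.

```python
import heapq

def floyd_lower_bounds(n, lb_arc):
    """Cost Estimator: all-pairs least times over static lower bounds,
    where lb_arc[u][v] <= travel_time(u, v, t) holds for every t."""
    INF = float("inf")
    d = [[0.0 if u == v else lb_arc.get(u, {}).get(v, INF)
          for v in range(n)] for u in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def a_star_least_time(succ, travel_time, origin, dest, t0, lb):
    """Itinerary Finder: A* on arrival times; lb[v][dest] never
    overestimates the remaining time, so the first pop of dest is
    the optimal (earliest) arrival."""
    best = {origin: t0}
    pq = [(t0 + lb[origin][dest], t0, origin)]
    while pq:
        _, t, u = heapq.heappop(pq)
        if u == dest:
            return t                      # earliest arrival time
        if t > best.get(u, float("inf")):
            continue                      # stale queue entry
        for v in succ.get(u, ()):
            arrive = t + travel_time(u, v, t)   # time-dependent arc
            if arrive < best.get(v, float("inf")):
                best[v] = arrive
                heapq.heappush(pq, (arrive + lb[v][dest], arrive, v))
    return None                           # destination unreachable
```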


2020 ◽ Vol 26 (9) ◽ pp. 1928-1950
Author(s): S.N. Yashin, Yu.V. Trifonov, E.V. Koshelev

Subject. This article deals with simulation technologies based on the principles of stochastic optimization, which can yield a significant financial effect when planning the investment-driven development of individual innovation and industrial clusters as well as the country's federal districts. Objectives. The article aims to investigate the mechanisms of inter-cluster cooperation within a single district. Methods. For the analysis, we used a stochastic optimization model that accounts for economic, financial, information, and logistics inter-cluster cooperation within a single federal district. Results. The stochastic optimization model of economic, financial, information, and logistics inter-cluster cooperation shows that an increase in fixed investment does not always cause population growth in the federal district's regions. Conclusions. Using a digital twin of the inter-cluster cooperation mechanism can help avoid premature, ill-founded public policy decisions regarding the further development of innovation and industrial clusters.


Entropy ◽ 2021 ◽ Vol 23 (3) ◽ pp. 380
Author(s): Emanuele Cavenaghi, Gabriele Sottocornola, Fabio Stella, Markus Zanker

The Multi-Armed Bandit (MAB) problem has been extensively studied in order to address real-world challenges related to sequential decision making. In this setting, an agent selects the best action to perform at time-step t, based on the past rewards received from the environment. This formulation implicitly assumes that the expected payoff of each action remains stationary over time. Nevertheless, in many real-world applications this assumption does not hold and the agent faces a non-stationary environment, that is, one with a changing reward distribution. Thus, we present a new MAB algorithm, named f-Discounted-Sliding-Window Thompson Sampling (f-dsw TS), for non-stationary environments, that is, settings where the data stream is affected by concept drift. The f-dsw TS algorithm is based on Thompson Sampling (TS) and exploits a discount factor on the reward history and an arm-related sliding window to counteract concept drift in non-stationary environments. We investigate how to combine these two sources of information, namely the discount factor and the sliding window, by means of an aggregation function f(.). In particular, we propose a pessimistic (f=min), an optimistic (f=max), and an averaged (f=mean) version of the f-dsw TS algorithm. A rich set of numerical experiments is performed to evaluate the f-dsw TS algorithm against both stationary and non-stationary state-of-the-art TS baselines. We exploit synthetic environments (both randomly generated and controlled) to test the MAB algorithms under different types of drift, that is, sudden/abrupt, incremental, gradual, and increasing/decreasing drift. Furthermore, we adapt four real-world active learning tasks to our framework: a prediction task on crimes in the city of Baltimore, a classification task on insect species, a recommendation task on local web-news, and a time-series analysis of microbial organisms in the tropical air ecosystem. The f-dsw TS approach emerges as the best-performing MAB algorithm: at least one version of f-dsw TS outperforms the baselines in the synthetic environments, demonstrating the robustness of f-dsw TS under different concept drift types, and the pessimistic version (f=min) proves the most effective in all real-world tasks.
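For Bernoulli rewards, the mechanics described above can be sketched as follows: each arm keeps discounted success/failure counts and a fixed-length window of recent rewards, one Beta sample is drawn from each view, and the two samples are merged with f. Parameter values, names, and the exact update order are illustrative assumptions rather than the authors' implementation.

```python
import random
from collections import deque

class FDswTS:
    def __init__(self, n_arms, gamma=0.95, window=50, f=min, seed=0):
        self.gamma = gamma                 # discount factor on history
        self.f = f                         # aggregation: min / max / mean
        self.s = [0.0] * n_arms            # discounted successes per arm
        self.fl = [0.0] * n_arms           # discounted failures per arm
        self.win = [deque(maxlen=window) for _ in range(n_arms)]
        self.rng = random.Random(seed)

    def select(self):
        """Draw one Beta sample from the discounted view and one from
        the window view, combine them with f, play the best arm."""
        scores = []
        for a in range(len(self.s)):
            theta_d = self.rng.betavariate(1 + self.s[a], 1 + self.fl[a])
            ws = sum(self.win[a]); wf = len(self.win[a]) - ws
            theta_w = self.rng.betavariate(1 + ws, 1 + wf)
            scores.append(self.f(theta_d, theta_w))
        return max(range(len(scores)), key=scores.__getitem__)

    def update(self, arm, reward):
        # Fade every arm's history, then credit the pulled arm; the
        # deque drops the oldest reward automatically at capacity.
        for a in range(len(self.s)):
            self.s[a] *= self.gamma
            self.fl[a] *= self.gamma
        self.s[arm] += reward
        self.fl[arm] += 1 - reward
        self.win[arm].append(reward)

# Usage sketch: f=mean can be passed as lambda a, b: (a + b) / 2.
bandit = FDswTS(n_arms=3, gamma=0.95, window=50, f=min)
arm = bandit.select()
bandit.update(arm, reward=1)
```

The discount factor lets old evidence fade gradually, while the window reacts sharply to abrupt drift; taking f=min trusts whichever view is more pessimistic about an arm, which matches the version the abstract reports as strongest on real-world tasks.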


2013 ◽ Vol 29 (4) ◽ pp. 511-537
Author(s): Jeroen Pannekoek, Sander Scholtus, Mark Van der Loo

Abstract Data editing is arguably one of the most resource-intensive processes at national statistical institutes (NSIs). Forced by ever-increasing budget pressure, NSIs keep searching for more efficient forms of data editing. Efficiency gains can be obtained by selective editing, that is, limiting manual editing to influential errors, and by automating the editing process as much as possible. In our view, an optimal mix of these two strategies should be aimed for. In this article we present a decomposition of the overall editing process into a number of different tasks and give an up-to-date overview of the possibilities of automatic editing in terms of these tasks. During the design of an editing process, this decomposition may be helpful in deciding which tasks can be done automatically and for which tasks (additional) manual editing is required. Such decisions can be made a priori, based on the specific nature of the task, or by empirical evaluation, which is illustrated by examples. The decomposition into tasks, or statistical functions, also naturally leads to reusable components, resulting in efficiency gains in process design.
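As a toy illustration of the selective-editing idea, the snippet below scores each record by the influence its suspicious value would have on a weighted total and routes only high-scoring records to manual review; the score formula, the threshold, and the data are assumptions for illustration, not the article's method.

```python
def selective_editing(records, threshold):
    """records: (weight, observed, anticipated) triples; returns the
    indices routed to manual review and to automatic treatment."""
    manual, automatic = [], []
    for i, (w, obs, ant) in enumerate(records):
        influence = w * abs(obs - ant)   # impact on the weighted total
        (manual if influence > threshold else automatic).append(i)
    return manual, automatic

records = [(10, 500, 120), (200, 45, 44), (5, 30, 28)]
print(selective_editing(records, threshold=1000))
# record 0 is influential (10 * |500 - 120| = 3800) and edited manually;
# the other two are left to automatic editing
```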

