A multiobjective integrated multiproject scheduling and multiskilled workforce assignment model considering learning effect under uncertainty

2019, Vol 36 (1), pp. 276-296
Author(s): Milad Hematian, Mir Mehdi Seyyed Esfahani, Iraj Mahdavi, Nezam Mahdavi‐Amiri, Javad Rezaeian
1968
Author(s): Persis T. Sturges, Patricia L. Donaldson, Emmett G. Anderson

2014, Vol 13 (8), pp. 4723-4728
Author(s): Pratiksha Saxena, Smt. Anjali

In this paper, an integrated simulation optimization model for assignment problems is developed, together with an effective algorithm for evaluating and analyzing the stored simulation results. The paper proposes SIMASI (Simulation of Assignment Models), a tool that simulates and computes the results of different assignment models. SIMASI is programmed in .NET and uses an analytical approach to guide the optimization strategy. The objective is to provide a user-friendly simulation tool that yields optimized assignment model results. Simulation is carried out by supplying the required resource and destination requirement matrices; results are stored in a database for further comparison and study, and are reported in terms of the performance measures of the classical assignment models. The tool is interfaced with an optimization procedure based on these classical models, and the simulation results are analyzed rigorously with the help of numerical examples.
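The classical assignment model underlying such tools assigns each resource to exactly one destination so that total cost is minimized. The following is a minimal, illustrative sketch of that model (SIMASI's actual .NET implementation is not described in the abstract); it assumes a square cost matrix and uses brute-force enumeration for clarity:

```python
from itertools import permutations

def solve_assignment(cost):
    """Brute-force solver for the classical assignment model:
    assign each resource (row) to exactly one destination (column)
    so that total assignment cost is minimized.

    Returns (assignment, total_cost), where assignment[i] is the
    destination chosen for resource i."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# Illustrative cost matrix: cost[i][j] = cost of assigning resource i
# to destination j.
cost = [
    [9, 2, 7],
    [6, 4, 3],
    [5, 8, 1],
]
assignment, total = solve_assignment(cost)
print(assignment, total)  # (1, 0, 2) 9
```

Brute force is exponential and only suitable for tiny instances; a production solver would use the Hungarian algorithm, which solves the same model in O(n³).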


2020
Author(s): Kate Ergo, Luna De Vilder, Esther De Loof, Tom Verguts

Recent years have witnessed a steady increase in the number of studies investigating the role of reward prediction errors (RPEs) in declarative learning. In several experimental paradigms, RPEs drive declarative learning, with larger and more positive RPEs enhancing it. However, it is unknown whether the RPE must derive from the participant's own response, or whether any RPE is sufficient to obtain the learning effect. To test this, we generated RPEs in the same experimental paradigm while combining an agency and a non-agency condition. We observed no interaction between RPE and agency, suggesting that any RPE, irrespective of its source, can drive declarative learning. This result holds implications for declarative learning theory.
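The RPE itself is conventionally defined as the difference between the obtained and the expected reward, updated by a simple delta rule. The sketch below is a generic Rescorla-Wagner-style illustration of that quantity, not the authors' experimental model; the learning rate `alpha` and all values are hypothetical:

```python
def rpe_update(expected, reward, alpha=0.1):
    """Compute the reward prediction error (RPE) and update the
    reward expectation with a delta rule.

    A positive RPE means the outcome was better than expected;
    larger, more positive RPEs produce larger expectation updates."""
    rpe = reward - expected
    new_expected = expected + alpha * rpe
    return rpe, new_expected

# Hypothetical trial: participant expects reward 0.5 but receives 1.0.
rpe, expected = rpe_update(expected=0.5, reward=1.0)
print(rpe, expected)  # 0.5 0.55
```

Under the abstract's finding, a declarative-learning benefit scaled by such an RPE would apply regardless of whether the response generating the reward was the participant's own.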

