prescriptive model
Recently Published Documents


TOTAL DOCUMENTS: 76 (FIVE YEARS: 17)
H-INDEX: 9 (FIVE YEARS: 2)

2022, pp. 1444-1457
Author(s):  
Harish Maringanti

Framing a technology question as a simple choice between developing an in-house application system and an off-the-shelf proprietary system, or simply put, as a choice between build and buy, runs the risk of ignoring the myriad options available between the two extremes. In this era of cloud computing and anything-as-a-service models, the very notion of developing an in-house application would raise a few eyebrows among C-level executives. How, then, can academic libraries, under mounting pressure to demonstrate their value (Oakleaf, 2010), justify investments in software development in particular? What follows in these sections is a brief discussion of the importance of investing in software development in libraries, three mini-case studies demonstrating the wide possibilities of integrating software development into library operations, and a non-prescriptive model for assessing which projects may be worth pursuing from a software development standpoint.


2021, Vol 5 (CHI PLAY), pp. 1-21
Author(s):  
Lena Fanya Aeschbach, Sebastian A.C. Perrig, Lorena Weder, Klaus Opwis, Florian Brühlmann

Measuring theoretical concepts, so-called constructs, is a central challenge of Player Experience research. Building on recent work in HCI and psychology, we conducted a systematic literature review to study the transparency of measurement reporting. We accessed the ACM Digital Library to analyze all 48 full papers published at CHI PLAY 2020; of those, 24 papers used self-report measurements and were included in the full review. Specifically, we assessed whether researchers reported What, How, and Why they measured. We found that researchers matched their measures to the construct under study and that administrative details, such as the number of points on a Likert-type scale, were frequently reported. However, definitions of the constructs to be measured and justifications for selecting a particular scale were sparse. A lack of transparency in these areas not only threatens the validity of individual studies but further compromises the building of theories and the accumulation of research knowledge in meta-analytic work. This work is limited to assessing the current transparency of measurement reporting at CHI PLAY 2020; however, we argue this constitutes a fair foundation for assessing potential pitfalls. To address these pitfalls, we propose a prescriptive model of a measurement selection process that helps researchers systematically define their constructs, specify operationalizations, and justify why these measures were chosen. Future research employing this model should contribute to more transparency in measurement reporting. The research was funded through internal resources. All materials are available at https://osf.io/4xz2v/.


Author(s):  
David Bergman, Teng Huang, Philip Brooks, Andrea Lodi, Arvind U. Raghunathan

Business research practice is witnessing a surge in the integration of predictive modeling and prescriptive analysis. We describe JANOS, a modeling framework that seamlessly integrates the two streams of analytics, allowing researchers and practitioners to embed machine learning models in an end-to-end optimization framework. JANOS allows users to specify a prescriptive model using standard optimization modeling elements such as constraints and variables. The key novelty lies in providing modeling constructs that enable the specification of commonly used predictive models within an optimization model, expose the features of the predictive model as variables in the optimization model, and incorporate the output of the predictive models into the objective. The framework considers two sets of decision variables: regular and predicted. The relationship between the regular and predicted variables is specified by the user as pretrained predictive models. JANOS currently supports linear regression, logistic regression, and neural networks with rectified linear activation functions. In this paper, we demonstrate the flexibility of the framework through an example of scholarship allocation in a student enrollment problem and provide a numerical performance evaluation.

Summary of Contribution: This paper describes a new software tool, JANOS, that integrates predictive modeling and discrete optimization to assist decision-making. Specifically, the proposed solver takes as input user-specified pretrained predictive models and formulates optimization models directly over those predictive models by embedding them within an optimization model through linear transformations.
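The regular/predicted variable split is easiest to see in a toy version of the paper's scholarship example. The sketch below uses plain PuLP rather than JANOS's own API; the single-feature linear model and all coefficients are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the pattern JANOS generalizes: a pretrained linear
# regression embedded inside an optimization model. Illustrative only;
# JANOS itself adds constructs for logistic regression and ReLU networks.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

# Assumed pretrained model: enroll_score_i = w * scholarship_i + base_i
w = 0.02
base = [0.30, 0.45, 0.25, 0.50]  # per-student intercepts (made up)
budget = 30.0                    # total scholarship budget, in $1000s

prob = LpProblem("scholarship_allocation", LpMaximize)

# Regular decision variables: scholarship offered to each student
s = [LpVariable(f"s_{i}", lowBound=0, upBound=25) for i in range(len(base))]
# Predicted variables, tied to the regular ones by the pretrained model
p = [LpVariable(f"p_{i}") for i in range(len(base))]
for i in range(len(base)):
    prob += p[i] == w * s[i] + base[i]  # model output as a constraint

prob += lpSum(s) <= budget  # operational constraint on regular variables
prob += lpSum(p)            # objective: total predicted enrollment score

prob.solve()
print([value(v) for v in s], value(prob.objective))
```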


2021, Vol ahead-of-print (ahead-of-print)
Author(s):  
Fábio Lotti Oliva, Andrei Carlos Torresani Paza, Jefferson Luiz Bution, Masaaki Kotabe, Peter Kelle, ...

Purpose: This study aims to investigate the risks associated with managing dispersed knowledge in inter-organizational arrangements for innovation. Specifically, it proposes a model, applied in four steps, to analyze the knowledge management risks in open innovation.

Design/methodology/approach: Initially, the authors carried out a systematic literature review (SLR) on the concepts that connect knowledge management, inter-organizational arrangements for innovation, and risks. The SLR results led to a complementary theoretical review of the conceptual elements in question. Based on the findings, the authors developed a model to analyze the knowledge management risks in open innovation, which was validated by experts. They then studied the case of GOL Airlines, a company that uses innovation to overcome the paradox between low cost and full service in the commercial air transportation industry, considering the application and adjustment of the proposed model.

Findings: Open innovation is one of the inter-organizational arrangement types most often applied in the context of innovation. Relations between agents are the primary sources of risk when managing dispersed knowledge in these arrangements. The authors found five main associated risks, namely: the risk that the innovative effort does not reach its expected objective, the risk of ineffective knowledge transfer, the risk of misappropriation of value, the risk of dependency (lock-in), and the risk of relations.

Practical implications: The practical implication is the proposition of a procedure for applying the model to analyze the knowledge management risks in open innovation, which makes it a prescriptive model for identifying risks. The proposed model is described in four steps: identify the agents in the open innovation environment of value; identify the types of relations of each agent; consider the barriers to knowledge management in innovation; and assess the risks considering the possibilities derived from the agents, their relationships, and the barriers. The model is applied in the GOL case and the results are presented.

Originality/value: First, the study uses a novel approach to investigate open innovation while studying its risks. This approach considers that knowledge is dispersed and flows from one organization to another through a combination of relations inside the environment of value where the open innovation materializes. Second, it contributes to theory development by opening a research front that fuses four areas: risk management, knowledge management, innovation, and inter-organizational arrangements. Third, the paper proposes a theoretical model and presents its operationalization. The study aims to make an impact beyond academia and uses a case study to illustrate the model's application in a real and interesting open innovation project supporting the business model at GOL Airlines.


Author(s):  
Deepali Bajaj, Urmil Bharti, Anita Goel, S. C. Gupta

The microservices architectural style is gaining popularity in industry and is being widely adopted by large corporations such as Amazon, Netflix, Spotify, and eBay, among many others. Many other organizations also prefer to migrate their existing enterprise-scale applications to microservices architecture. Researchers have proposed various microservices decomposition approaches to be used when migrating or rebuilding a monolithic application to microservices. Applying any available approach to an existing monolithic application is not a straightforward decision; thus, there is a need for guidelines that assist in the migration process. A migration effort faces various challenges because different migration approaches use different sets of input data to identify microservices. Since the available migration techniques are not logically structured, selecting an appropriate migration strategy is a difficult decision for any system architect. So a recurrent open research question remains: which migration technique should be adopted to derive microservices from a legacy monolithic application? This paper addresses this research challenge by examining existing approaches for microservices migration and grouping them based on software development life cycle (SDLC) artifacts. Our research also proposes a microservices prescriptive model (MPM) derived from the existing prominent microservice migration techniques. This model provides recommendations (1) for refactoring an existing legacy system to microservices and (2) for new microservices development projects. Our study also offers insight into greenfield and brownfield development approaches for microservices applications. Moreover, researchers and practitioners in the field can use this model to further validate their migration approaches based on the available system artifacts.
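To make the artifact-driven idea concrete, here is a purely illustrative Python sketch of the kind of lookup such a recommendation model implies. The artifact-to-technique mappings below are common associations from the decomposition literature, stated as assumptions; they are not the paper's actual MPM.

```python
# Illustrative only: a toy lookup in the spirit of an SDLC-artifact-driven
# recommendation model. The mappings below are assumptions, not the MPM.
RECOMMENDATIONS = {
    "source_code": "static-analysis-based decomposition (brownfield)",
    "execution_logs": "dynamic, trace-based decomposition (brownfield)",
    "requirements_docs": "domain-driven design (greenfield)",
    "database_schema": "data-ownership-based decomposition (brownfield)",
}

def recommend(artifacts):
    """Return candidate techniques for the SDLC artifacts on hand."""
    return [RECOMMENDATIONS[a] for a in artifacts if a in RECOMMENDATIONS]

print(recommend(["source_code", "execution_logs"]))
```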


Author(s):  
Timothe Langlois-Therien ◽  
Brian Dewar ◽  
Ross Upshur ◽  
Michel Shamy

Evidence-based medicine (EBM) proposes a prescriptive model of physician decision-making in which "best evidence" is used to guide best practice. And yet, proponents of EBM acknowledge that it fails to offer a systematic theory of physician decision-making. In this paper, we explore how physicians from the neurology and emergency medicine communities have responded to an evolving body of evidence surrounding the acute treatment of patients with ischemic stroke. Through analysis of this case study, we argue that EBM's vision of evidence-based medical decision-making fails to appreciate a process that we have termed epistemic evaluation: physicians are required to interpret and apply any knowledge, even what EBM would term "best evidence", in light of their own knowledge, background, and experience. This is consequential for EBM, as understanding what physicians do and why they do it would appear to be essential to achieving optimal practice in accordance with best evidence.


Author(s):  
Dimitris Bertsimas ◽  
Joshua Ivanhoe ◽  
Alexandre Jacquillat ◽  
Michael Li ◽  
Alessandro Previero ◽  
...  

Abstract: The outbreak of COVID-19 has spurred extensive research worldwide to develop a vaccine. However, when a vaccine becomes available, limited production and distribution capabilities will likely lead to another challenge: whom to prioritize for vaccination to mitigate the near-term impact of the pandemic? To tackle that question, this paper first expands a state-of-the-art epidemiological model, called DELPHI, to capture the effects of vaccinations and the variability in mortality rates across subpopulations. It then integrates this predictive model into a prescriptive model to optimize vaccine allocation, formulated as a bilinear, non-convex optimization model. To solve it, this paper proposes a coordinate descent algorithm that iterates between optimizing vaccine allocations and simulating the dynamics of the pandemic. We implement the model and algorithm using real-world data in the United States. All else equal, the optimized vaccine allocation prioritizes states with a large number of projected cases and subpopulations facing higher risks (e.g., older ones). Ultimately, the optimized vaccine allocation can reduce the death toll of the pandemic by an estimated 10–25%, or 10,000–20,000 deaths over a three-month period in the United States alone.

Highlights:
- This paper formulates an optimization model for vaccine allocation in response to the COVID-19 pandemic. This model, referred to as DELPHI-V-OPT, integrates a predictive epidemiological model into a prescriptive model to support the allocation of vaccines across geographic regions (e.g., US states) and across risk classes (e.g., age groups).
- This paper develops a scalable coordinate descent algorithm to solve the DELPHI-V-OPT model. The proposed algorithm converges effectively and in short computational times. Therefore, the proposed approach can be implemented efficiently and allows extensive sensitivity analyses for scenario planning and policy analysis.
- Computational results demonstrate that optimized vaccine allocation strategies can curb the death toll of the COVID-19 pandemic by an estimated 10–25%, or 10,000–20,000 deaths over a three-month period in the United States alone. These results highlight the critical role of vaccine allocation in combating the COVID-19 pandemic, in addition to vaccine design and vaccine production.
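The alternating scheme is easier to see schematically. The toy Python below is our illustration, not the authors' DELPHI-V-OPT code: it alternates the two steps the abstract names, simulating the pandemic under the current allocation and then re-optimizing the allocation with the simulated risk profile held fixed, stopping once the projected death toll stops improving. All dynamics and numbers are placeholders.

```python
# Schematic coordinate descent between simulation and allocation.
# Everything here is a toy stand-in for the DELPHI dynamics.
import numpy as np

def simulate(alloc, base_risk, horizon=90):
    """Toy epidemic step: each group's effective risk depends on the
    whole allocation (vaccination slows overall spread)."""
    spread = 1.0 + 0.5 * (1.0 - alloc.mean())
    risk = base_risk * spread * np.exp(-alloc)
    return risk.sum() * horizon, risk

def optimize_alloc(risk, supply):
    """Allocation step with the trajectory frozen: cover the
    highest-risk groups first, up to the available supply."""
    alloc = np.zeros_like(risk)
    alloc[np.argsort(-risk)[:supply]] = 1.0
    return alloc

base_risk = np.array([0.8, 0.5, 0.9, 0.2, 0.4])  # per-group mortality (toy)
alloc = np.zeros_like(base_risk)
best, risk = simulate(alloc, base_risk)
for it in range(20):                             # coordinate-descent loop
    alloc = optimize_alloc(risk, supply=2)
    deaths, risk = simulate(alloc, base_risk)
    if deaths >= best - 1e-9:
        break                                    # no further improvement
    best = deaths
print(f"stopped after {it + 1} iterations; projected deaths: {best:.1f}")
```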

