The Zoltar forecast archive, a tool to standardize and store interdisciplinary prediction research

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Nicholas G. Reich ◽  
Matthew Cornell ◽  
Evan L. Ray ◽  
Katie House ◽  
Khoa Le

Abstract Forecasting has emerged as an important component of informed, data-driven decision-making in a wide array of fields. We introduce a new data model for probabilistic predictions that encompasses a wide range of forecasting settings. This framework clearly defines the constituent parts of a probabilistic forecast and proposes one approach for representing these data elements. The data model is implemented in Zoltar, a new software application that stores forecasts using the data model and provides standardized API access to the data. In one real-time case study, an instance of the Zoltar web application was used to store, provide access to, and evaluate real-time forecast data on the order of 10⁸ rows, provided by over 40 international research teams from academia and industry making forecasts of the COVID-19 outbreak in the US. Tools and data infrastructure for probabilistic forecasts, such as those introduced here, will play an increasingly important role in ensuring that future forecasting research adheres to a strict set of rigorous and reproducible standards.
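The abstract does not spell out the schema, but a minimal sketch of what the "constituent parts of a probabilistic forecast" could look like in code may help; the class and field names below are hypothetical illustrations, not the actual Zoltar data model.

```python
# A hypothetical illustration (NOT the actual Zoltar schema): each prediction
# is identified by a unit (where), a target (what/when), and a prediction
# class with its class-specific payload -- here, a quantile prediction.
from dataclasses import dataclass
from typing import List

@dataclass
class QuantilePrediction:
    unit: str                # location the forecast applies to, e.g. "US"
    target: str              # quantity and horizon, e.g. "1 wk ahead incident deaths"
    quantiles: List[float]   # probability levels, ascending
    values: List[float]      # predicted values, one per quantile

# Illustrative numbers only.
forecast = [
    QuantilePrediction(
        unit="US",
        target="1 wk ahead incident deaths",
        quantiles=[0.025, 0.5, 0.975],
        values=[9200.0, 11500.0, 14100.0],
    ),
]
```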

2021 ◽  
Author(s):  
Fahd Siddiqui ◽  
Mohammadreza Kamyab ◽  
Michael Lowder

Abstract The economic success of unconventional reservoirs relies on driving down completion costs. Manually measuring the operational efficiency for a multi-well pad can be error-prone and time-prohibitive. Complete automation of this analysis can provide effortless real-time insight to completion engineers. This study presents a real-time method for measuring the time spent on each completion activity, thereby enabling the identification of potential cost-reduction avenues. Two data acquisition boxes are utilized at the completion site to transmit both the fracturing and wireline data in real-time to a cloud server. A data processing algorithm is described to determine the start and end of these two operations for each stage of every well on the pad. The described method then determines other activity intervals (fracturing swap-over, wireline swap-over, and waiting on offset wells) based on the relationship between the fracturing and wireline segments of all the wells. The processed data results can be viewed in real-time on mobile devices or computers connected to the cloud. Viewing the full operational time log in real-time helps engineers analyze the whole operation and determine key performance indicators (KPIs) such as the number of fractured stages per day, pumping percentage, and average fracturing and wireline swap-over durations for a given time period. In addition, the performance of the day and night crews can be evaluated. By plotting a comparison of KPIs for wireline and fracturing times, trends can be readily identified for improving operational efficiency. Practices from best-performing stages can be adopted to reduce non-pumping times. This helps operators save time and money and optimize for more efficient operations. As the number of wells increases, the complexity of manually generating the time log increases. The presented method can handle multi-well fracturing and wireline operations without such difficulty and in real-time. A case study is also presented, where an operator in the US Permian basin used this method in real-time to view and optimize zipper operations. Analysis indicated that the time spent on swap-over activities could be reduced. This operator set a realistic goal of reducing each swap-over interval by 10 minutes. Within one pad, the goal was reached using this method, reducing the total pad time by 15 hours. The presented method provides an automated overview of fracturing operations. Based on the analysis, timely decisions can be made to reduce operational costs. Moreover, because this method is automated, it is not limited to single-well operations but can handle the multi-well pad completion designs that are commonplace in unconventionals.
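The abstract does not publish the algorithm itself, but the interval logic it describes can be sketched as follows; the field names and per-well sequential view below are illustrative assumptions, not the authors' implementation.

```python
# Interval-logic sketch: given (start, end) times of fracturing and wireline
# segments on one well, any gap between consecutive activities is a
# swap-over interval. Names and the per-well view are illustrative.
from datetime import datetime, timedelta
from typing import List, Tuple

Interval = Tuple[datetime, datetime]  # (start, end) of one activity segment

def swap_over_durations(frac: List[Interval], wireline: List[Interval]) -> List[timedelta]:
    """Return the gaps between consecutive activity segments, in time order."""
    segments = sorted(frac + wireline, key=lambda seg: seg[0])
    return [nxt[0] - cur[1]                 # gap = next start - current end
            for cur, nxt in zip(segments, segments[1:])
            if nxt[0] > cur[1]]             # overlapping segments -> no gap
```

For scale, at the operator's stated goal of 10 minutes saved per swap-over, the reported 15-hour reduction corresponds to roughly 90 swap-over intervals across the pad.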


2021 ◽  
Author(s):  
Jason Hunter ◽  
Mark Thyer ◽  
Dmitri Kavetski ◽  
David McInerney

Probabilistic predictions provide crucial information regarding the uncertainty of hydrological predictions, which is a key input for risk-based decision-making. However, they are often excluded from hydrological modelling applications because suitable probabilistic error models can be challenging both to construct and to interpret, and the quality of results often depends on the objective function used to calibrate the hydrological model.

We present an open-source R package and an online web application with two aims. First, these resources are easy to use and accessible, so that users need no specialised knowledge of probabilistic modelling to apply them. Second, the probabilistic error model we describe provides high-quality probabilistic predictions for a wide range of commonly used hydrological objective functions; this broad applicability rests on a new innovation that resolves a long-standing issue with the model assumptions that previously prevented it.

We demonstrate our methods by comparing the new probabilistic error model with an existing reference error model in an empirical case study covering 54 perennial Australian catchments, the hydrological model GR4J, 8 common objective functions and 4 performance metrics (reliability, precision, volumetric bias and errors in the flow duration curve). The existing reference error model introduces additional flow dependencies into the residual error structure when used with most of the study objective functions, which in turn leads to poor-quality probabilistic predictions. In contrast, the new probabilistic error model achieves high-quality probabilistic predictions for all objective functions used in this case study.

The new probabilistic error model, the open-source software and the web application aim to facilitate the adoption of probabilistic predictions in the hydrological modelling community, and to improve the quality of predictions and of the decisions made using them. In particular, our methods can be used to obtain high-quality probabilistic predictions from hydrological models calibrated with a wide range of common objective functions.
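As an illustration of one of the four performance metrics named above, the sketch below (plain Python, not the authors' R package) assesses reliability via the probability integral transform (PIT): for a reliable probabilistic prediction, the quantile of each observation within its predictive distribution should be uniformly distributed.

```python
# Reliability via the probability integral transform (PIT), a standard
# diagnostic for probabilistic predictions; a sketch, not the authors' code.
import numpy as np

def pit_values(ensemble: np.ndarray, observed: np.ndarray) -> np.ndarray:
    """ensemble: (n_times, n_replicates) predictive samples; observed: (n_times,).
    Returns the empirical quantile of each observation within its ensemble."""
    return (ensemble <= observed[:, None]).mean(axis=1)

def reliability_metric(ensemble: np.ndarray, observed: np.ndarray) -> float:
    """Mean absolute deviation of sorted PIT values from uniform quantiles.
    0 indicates perfectly reliable predictions; larger is worse."""
    pit = np.sort(pit_values(ensemble, observed))
    uniform = (np.arange(1, len(pit) + 1) - 0.5) / len(pit)
    return float(np.abs(pit - uniform).mean())
```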


2014 ◽  
Vol 13 (6) ◽  
pp. 260-263
Author(s):  
Geeta Rana ◽  
Alok Kumar Goel ◽  
Ajay Kumar Saini

Purpose – This paper aims to examine the issues of knowledge transfer in an international strategic alliance within Hero MotoCorp Ltd., an Indian multinational company. International strategic alliances have increased in number over the past decades, and the transfer of knowledge within multinational companies is the subject of a wider debate. The case explores the complex issues involved in cross-organization and cross-country transfer of knowledge. The company has forged a strategic alliance with the US-based Erik Buell Racing to access technology and design inputs. Design/methodology/approach – It presents a structured case study that examines a wide range of knowledge transfer issues in an international strategic alliance. Findings – It reveals that a major influencing factor is the national culture of the parents and that of the host country, which together provide the context within which alliances operate. It also explores the ways in which the multi-parentage of strategic alliances influences their human resource management (HRM) policies and practices. Originality/value – It provides plenty of useful information on an issue that affects virtually every employee and organization.


2013 ◽  
Vol 11 (3 and 4) ◽  
Author(s):  
Austin Vance ◽  
Trevor Cickovski

Behavior-Driven Development (BDD) is a software design methodology which bridges the developer-client gap by evolving software through communication between the two sides and shaping it to the goals of stakeholders. As a recently published iterative development strategy, BDD is slowly being adopted as a software practice in a wide range of domains. We study the applicability of BDD to designing Narwhal, a classroom drawing application that mimics a combination of PowerPoint slides and a whiteboard. Through this case study, we employ junior and senior seminar students as clients and observe the effects of BDD on Narwhal’s evolution over a three-month period. We conclude with a discussion of the general applicability of BDD to the design of classroom tools, following lessons learned from this case study.
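For readers unfamiliar with the methodology, here is a minimal sketch of a BDD-style test in the Given/When/Then idiom, runnable with pytest; Narwhal's actual code is not shown in the abstract, so the Whiteboard class below is purely hypothetical.

```python
# A minimal BDD-flavored test: the scenario reads as client-facing behavior
# (Given/When/Then), and the Whiteboard class is a hypothetical stand-in
# for a classroom drawing tool like Narwhal.
class Whiteboard:
    def __init__(self):
        self.strokes = []

    def draw(self, stroke: str) -> None:
        self.strokes.append(stroke)

    def clear(self) -> None:
        self.strokes = []

def test_clearing_the_board_removes_all_strokes():
    # Given a whiteboard with one stroke drawn on it
    board = Whiteboard()
    board.draw("freehand-line")
    # When the presenter clears the board
    board.clear()
    # Then no strokes remain visible
    assert board.strokes == []
```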


Author(s):  
Xingmin Wang ◽  
Shengyin Shen ◽  
Debra Bezzina ◽  
James R. Sayer ◽  
Henry X. Liu ◽  
...  

Ann Arbor Connected Vehicle Test Environment (AACVTE) is the world’s largest operational, real-world deployment of connected vehicles (CVs) and connected infrastructure, with over 2,500 vehicles and 74 infrastructure sites, including intersections, midblocks, and highway ramps. The AACVTE generates a massive amount of data on a scale not seen in traditional transportation systems, which provides a unique opportunity for developing a wide range of connected vehicle (CV) applications. This paper introduces a data infrastructure that processes the CV data and provides interfaces to support real-time or near-real-time CV applications. The data infrastructure has three major components: data receiving, data pre-processing, and visualization, including the generation of performance measures. The data processing algorithms include signal phasing and timing (SPaT) data compression, lane phase mapping identification, trajectory data map matching, and global positioning system (GPS) coordinate conversion. Simple performance measures are derived from the processed data, including the time–space diagram, vehicle delay, and observed queue length. Finally, a web-based interface is designed to visualize the data. Potential CV applications that can be built on this connected data infrastructure, including traffic state estimation, traffic control, and safety, are also discussed.
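As a flavor of one processing step named above, the sketch below converts GPS coordinates to local planar x/y around a reference point using a simple equirectangular approximation, adequate over the few-kilometre extent of a test site; the deployment's actual conversion method is not specified in the paper, so this is only an assumption of how such a step might look.

```python
# GPS coordinate conversion sketch: lat/lon -> local east/north metres
# relative to a reference point, via an equirectangular approximation.
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius

def gps_to_local_xy(lat: float, lon: float,
                    ref_lat: float, ref_lon: float) -> tuple:
    """Return (x_east_m, y_north_m) of a point relative to a reference coordinate."""
    x = math.radians(lon - ref_lon) * EARTH_RADIUS_M * math.cos(math.radians(ref_lat))
    y = math.radians(lat - ref_lat) * EARTH_RADIUS_M
    return x, y

# Example: offset of a vehicle position from a hypothetical intersection center.
x, y = gps_to_local_xy(42.2810, -83.7490, ref_lat=42.2808, ref_lon=-83.7485)
```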


2016 ◽  
Vol 859 ◽  
pp. 110-115
Author(s):  
Corina Monica Pop ◽  
Gheorghe Leonte Mogan

Creating a dedicated web application to present a library’s services and resources requires special skills in the field of web technologies. The web application developed here allows quick access to important information about the library, since only a few taps on the touch screen are needed to reach and view the desired information. The software is intended to enhance users’ real-time access to information on the library’s newest book acquisitions, free database trials, and events.


2020 ◽  
pp. 002200942092588
Author(s):  
Timothy J. Minchin

This article provides the first in-depth historical case study of Honda’s assembly plant in Lincoln, Alabama. Established in 1999, the plant became one of the biggest auto factories in the US, employing over 4,500 workers. Drawing on a wide range of sources, including rare interviews with top company and state officials, it argues that the establishment of the plant did not just reflect financial subsidies, the prevailing view in the press (and in the limited academic literature). Other factors were crucial, including site location, the availability of a willing and qualified workforce, and union avoidance. The personal intervention of state leaders, especially Governor Don Siegelman, is also uncovered here. In a broader context, the article illuminates the globalization of the car industry. In 2018 foreign-owned companies accounted for half of US auto production, and Alabama was a leading producer. Despite this, the burgeoning sector has been overlooked by scholars compared to domestically owned carmakers. Honda in Alabama has been particularly neglected, yet its story is significant and distinctive. A highly successful plant, Honda Manufacturing of Alabama had one of the highest levels of domestic content in the country, along with more American managers than Honda’s better-known factory in Marysville, Ohio, and a more diverse work force.


Author(s):  
Shikai Guo ◽  
Rong Chen ◽  
Hui Li ◽  
Jian Gao ◽  
Yaqing Liu

Crowdsourcing carried out by cyber citizens instead of hired consultants and professionals has become an increasingly appealing solution for testing the feature-rich, interactive web. Despite the availability of various online crowdsourced testing services, the benefits of exposure to a wider audience and of harnessing the collective efforts of individuals remain uncertain, especially when quality control is problematic in an open environment. The objective of this paper is to propose a real-time collaborative testing approach (RCTA) to enable productive crowdsourced testing on a dynamic Internet. We implemented a prototype crowdsourcing system, XTurk, and carried out a case study to understand crowdsourced testers’ behavior, their trustworthiness, the execution time of test cases, and the accuracy of feedback. Several experiments were carried out; the results validate the quality, efficiency and reliability of the present approach, which is shown to outperform previous methods in terms of positive testing feedback.
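The abstract does not detail RCTA's quality-control mechanism, but one common remedy for unreliable feedback in an open environment is redundancy with majority voting; the sketch below illustrates that general idea with hypothetical names, and is not XTurk's actual implementation.

```python
# Majority-vote aggregation sketch: each test case is executed by several
# crowd testers; the consensus verdict damps out unreliable individual reports.
from collections import Counter
from typing import Dict, List

def aggregate_verdicts(reports: Dict[str, List[str]]) -> Dict[str, str]:
    """Map each test-case id to the majority verdict among its reports."""
    return {case_id: Counter(verdicts).most_common(1)[0][0]
            for case_id, verdicts in reports.items() if verdicts}

# Example: three testers disagree on case "login-form"; majority wins.
consensus = aggregate_verdicts({"login-form": ["pass", "fail", "pass"]})
assert consensus["login-form"] == "pass"
```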


Author(s):  
Jamuna S Murthy ◽  
Siddesh G.M. ◽  
Srinivasa K.G.

Trend analysis over Twitter offers organizations a fast and effective way of predicting future trends. In recent years, a wide range of indicators and methods have been used to predict trends on Twitter, with varying results; unfortunately, most of the research has focused only on emerging trends that have already gained long-term attention on the platform. This article addresses trend variation, i.e. predicting whether a trend on Twitter will gain attention in the next few hours. Hence a novel method called “Twitter Trend Momentum (TTM)” is introduced for trend prediction; it enhances a well-known stock market indicator, moving average convergence divergence (MACD). Reason analysis for trend variation is also carried out as an extension of the authors’ research work. An evaluation of the framework showed the best results, which were then applied to build a real-time web application called “TwitTrend.” The application acts as a real-time update and recommendation system of top trends for users.
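Since TTM builds on MACD, a short sketch of the underlying indicator may help; here it is applied to an hourly tweet-count series rather than a stock price. The 12/26/9 window lengths are the conventional MACD defaults, not necessarily those used by TTM.

```python
# MACD sketch applied to tweet counts: the difference between a fast and a
# slow exponential moving average, plus a signal line and histogram.
import pandas as pd

def macd(counts: pd.Series, fast: int = 12, slow: int = 26,
         signal: int = 9) -> pd.DataFrame:
    """counts: hourly tweet volume for one trend, indexed by time."""
    fast_ema = counts.ewm(span=fast, adjust=False).mean()
    slow_ema = counts.ewm(span=slow, adjust=False).mean()
    macd_line = fast_ema - slow_ema
    signal_line = macd_line.ewm(span=signal, adjust=False).mean()
    # A positive, widening histogram suggests the trend is gaining momentum.
    return pd.DataFrame({"macd": macd_line, "signal": signal_line,
                         "histogram": macd_line - signal_line})
```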


2008 ◽  
Vol 35 (1) ◽  
pp. 133 ◽  
Author(s):  
David A. Swanson

This case study deals with a problem quite different from the typical one facing most applied demographers: the identification of a “population” using a set of criteria established by a regulatory agency, specifically criteria established by the US Nuclear Regulatory Commission for purposes of Site Characterization of the High-Level Nuclear Waste Repository proposed for Yucca Mountain, Nevada. Consistent with other recent studies, this one suggests that a wide range of skills may be needed in dealing with problems posed to applied demographers by clients and users in the 21st century. As such, budding applied demographers, especially those nearing completion of their graduate studies, should consider adopting a set of skills beyond traditional demography.

