The Need for a Comprehensive Cement Database - A Novel Approach to Best Practices by Cataloging Cement Properties

2021 ◽  
Author(s):  
Jocin James Abraham ◽  
Cameron Devers ◽  
Catalin Teodoriu ◽  
Mahmood Amani

Abstract In conventional well design, the cement sheath acts as one of the primary barriers of protection in the well integrity matrix. Once the wellbore cement is set, the well is exposed to various conditions and environments over time that can affect the integrity of the cement, and these effects are poorly studied and documented. Because there are also multiple cement recipes and formulations, the task of studying downhole cement performance and categorizing the results becomes more complicated, creating the need for an integrated database of information. The objective of this paper is to document desirable cement properties, develop an optimal method for presenting this data, and construct a database which integrates this information and allows streamlined data entry and retrieval. Multiple variables must be considered when aggregating cement recipes and their corresponding properties over time. To test the behavior of these cement recipes over time, samples are created and aged in various controlled environments, and their properties are tested periodically. The database was developed with a suitable interface to provide intuitive data entry and practical analysis capability, with inputs for the types of cement used, the additives added, the properties of the cement mixtures over time, and any corresponding analysis performed on the samples, in order to support best practice. Differences in geology, drilling techniques, and standards often require the use of different cement recipes with varied additives to cater to each job. These include accelerators, retarders, extenders, weighting agents, and fluid-loss control additives, as well as special additives such as latex, fiber additives, and foam cements. The database interface is designed to accommodate these variations in the cement recipes and to track the properties of samples over time, giving a comprehensive understanding of the behavior of the samples as they age. With information from the industry, literature, and laboratory experiments, properties such as the Unconfined Compressive Strength (UCS), thickening times, gel strength development, and densities, to name a few, will be integrated into the database. Data analytics strategies will also be applied to the aggregated information, and the properties of the samples over time will be correlated with field data as well as literature to ensure proper representation and accuracy of the data. The database and the knowledge collected will be utilized as a source of information to enhance common cementing practices, as well as to develop and refine industry best practices applicable to any cement job in the world. Currently, the database presented in this paper contains over 1000 unique cement samples, prepared and documented by multiple individuals, with the aim of creating a unique cement repository and database that focuses on long-term cement properties.
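
The database structure described here links cement recipes, their additives, aged samples, and time-stamped property measurements. As a rough illustration of how such a store might be organized, the sketch below creates a minimal SQLite schema from Python; every table and column name is an assumption made for the example and does not reflect the authors' actual design.

```python
# Illustrative sketch only: one possible relational layout for a long-term
# cement-property database of the kind described in the abstract. All table
# and column names are assumptions, not the authors' actual schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cement_recipe (
    recipe_id           INTEGER PRIMARY KEY,
    cement_class        TEXT NOT NULL,      -- e.g. API Class G or H
    water_cement_ratio  REAL NOT NULL
);

CREATE TABLE additive (
    additive_id         INTEGER PRIMARY KEY,
    recipe_id           INTEGER REFERENCES cement_recipe(recipe_id),
    additive_type       TEXT NOT NULL,      -- accelerator, retarder, extender, ...
    concentration_bwoc  REAL                -- percent by weight of cement
);

CREATE TABLE sample (
    sample_id           INTEGER PRIMARY KEY,
    recipe_id           INTEGER REFERENCES cement_recipe(recipe_id),
    cast_date           TEXT NOT NULL,
    curing_environment  TEXT                -- temperature, pressure, fluid exposure
);

CREATE TABLE property_measurement (
    measurement_id      INTEGER PRIMARY KEY,
    sample_id           INTEGER REFERENCES sample(sample_id),
    age_days            REAL NOT NULL,      -- sample age at test time
    property_name       TEXT NOT NULL,      -- UCS, thickening time, gel strength, density
    value               REAL NOT NULL,
    unit                TEXT NOT NULL
);
""")

# Example query: UCS development over time for each recipe (empty here,
# since no rows have been inserted in this sketch).
rows = conn.execute("""
    SELECT r.recipe_id, m.age_days, m.value
    FROM property_measurement AS m
    JOIN sample AS s        ON s.sample_id = m.sample_id
    JOIN cement_recipe AS r ON r.recipe_id = s.recipe_id
    WHERE m.property_name = 'UCS'
    ORDER BY r.recipe_id, m.age_days
""").fetchall()
print(rows)
```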

Author(s):  
Punya Prasad Sapkota ◽  
Kashif Siddiqi

According to OCHA, one in every 70 people around the world is caught up in a crisis (natural disasters, conflict, climate change, etc.) and urgently needs humanitarian assistance and protection. The humanitarian community assists millions of people every year based on emerging humanitarian needs. Most of the time, conditions inside the affected countries at the point when humanitarian needs data are collected are not very conducive, so simple collection methods are required, such as paper-based data collection with simple questions. These data are later entered into a database or spreadsheet through rigorous and time-consuming data entry efforts. Dynamic changes in people's needs, the number of partners involved, the complexity of evolving processes, and emerging technologies have, over time, led to changes in the processes for data collection and management. This article is an attempt to capture humanitarian data collection best practices and the use of different technologies in managing data to facilitate humanitarian needs assessment processes for the Syria crisis.


2002 ◽  
Vol 31 (4) ◽  
pp. 507-516 ◽  
Author(s):  
Linda Bjornberg ◽  
Susan DellCioppia ◽  
Kelly Tanzer

The IPMA HR Benchmarking Committee has identified a series of successful approaches—“best practices”—in key human resource areas from the 2001–2002 HR Benchmarking Survey. The primary purpose of the benchmarking project is to provide human resource practitioners with tools, models, skills, methods, and data to improve the effectiveness of their human resource programs for their customers. The main goal of the benchmarking project is to identify, measure, and share the best practices of leading HR organizations so that others can compare their practices to those organizations and identify opportunities to improve their own. Rarely can a program or solution seamlessly transfer to every other organization, but the IPMA HR Benchmarking Committee will feature successful models for HR professionals to review and determine whether they may adopt or adapt the practice—or elements of the practice—in their own organizations. In conjunction with IPMA's available benchmarking data, the HR Benchmarking Committee will use the following criteria to determine which agencies have “potential” best practices:

- Successful over time
- Quantitative and/or qualitative results
- Recognized or recognizable positive outcomes (customer satisfaction; positive impact)
- Innovative
- Replicable (transferable with modifications; portable; adds value by improving service, quality, and/or productivity)
- Meaningful to users of the benchmarking site

Training and Development was one of the “best practice” program areas identified. Linda Bjornberg, a member of the Benchmarking Committee, discusses the innovative and successful efforts of the selected HR organizations in measuring the impact of training on their organizations' missions.


2020 ◽  
Vol 42 (1) ◽  
pp. 37-103
Author(s):  
Hardik A. Marfatia

In this paper, I undertake a novel approach to uncover the forecasting interconnections in the international housing markets. Using a dynamic model averaging framework that allows both the coefficients and the entire forecasting model to dynamically change over time, I uncover the intertwined forecasting relationships in 23 leading international housing markets. The evidence suggests significant forecasting interconnections in these markets. However, no country holds a constant forecasting advantage, including the United States and the United Kingdom, although the U.S. housing market's predictive power has increased over time. Evidence also suggests that allowing the forecasting model to change is more important than allowing the coefficients to change over time.
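
The dynamic model averaging framework referenced here combines a set of candidate forecasting models whose probabilities are updated recursively, so that the preferred model can change over time. As a rough illustration of the general technique (not the paper's actual implementation), the minimal sketch below updates model probabilities with a forgetting factor on synthetic data; the data, parameters, and simplified Gaussian likelihood are all assumptions made for the example.

```python
# Minimal sketch of a dynamic model averaging (DMA) recursion with a
# forgetting factor. Synthetic data and simplified likelihood; illustrative
# only, not the implementation used in the paper.
import numpy as np

rng = np.random.default_rng(0)
T, K = 200, 3                        # time periods, candidate forecasting models
y = rng.normal(size=T)               # placeholder target series
# Each candidate model's one-step-ahead forecasts (noisier models forecast worse).
preds = y[:, None] + rng.normal(scale=[0.5, 1.0, 2.0], size=(T, K))

alpha = 0.99                         # forgetting factor: < 1 lets model weights drift
probs = np.full(K, 1.0 / K)          # equal prior model probabilities
combined = np.zeros(T)

for t in range(T):
    # Prediction step: raise probabilities to alpha (flattens them toward uniform).
    probs = probs ** alpha
    probs /= probs.sum()
    # Combined forecast is the probability-weighted average of the model forecasts.
    combined[t] = probs @ preds[t]
    # Update step: reweight models by how well they fit the new observation.
    lik = np.exp(-0.5 * (y[t] - preds[t]) ** 2)   # Gaussian kernel, unit variance assumed
    probs = probs * lik
    probs /= probs.sum()

print("final model probabilities:", np.round(probs, 3))
```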


Symmetry ◽  
2020 ◽  
Vol 13 (1) ◽  
pp. 9
Author(s):  
John H. Graham

Best practices in studies of developmental instability, as measured by fluctuating asymmetry, have developed over the past 60 years. Unfortunately, they are haphazardly applied in many of the papers submitted for review. Most often, research designs suffer from lack of randomization, inadequate replication, poor attention to size scaling, lack of attention to measurement error, and unrecognized mixtures of additive and multiplicative errors. Here, I summarize a set of best practices, especially in studies that examine the effects of environmental stress on fluctuating asymmetry.
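
To make the measurement issues above concrete, the sketch below computes two widely used fluctuating-asymmetry summaries on simulated bilateral measurements: a plain unsigned index and a size-scaled one, plus a log-transformed variant sometimes used when errors are multiplicative rather than additive. The simulated data and the choice of indices are assumptions for illustration only, not recommendations from the paper.

```python
# Illustrative sketch: common fluctuating-asymmetry (FA) summaries on simulated
# bilateral trait measurements. Real studies also need replicate measurements
# of each side so that measurement error can be separated from true asymmetry.
import numpy as np

rng = np.random.default_rng(1)
n = 100
size = rng.normal(50.0, 5.0, n)              # overall trait size per individual
left = size + rng.normal(0.0, 0.4, n)        # additive developmental noise, left side
right = size + rng.normal(0.0, 0.4, n)       # additive developmental noise, right side

fa_unsigned = np.mean(np.abs(right - left))                            # mean |R - L|
fa_size_scaled = np.mean(np.abs(right - left) / ((right + left) / 2))  # scaled by trait size

# If asymmetry grows with trait size (multiplicative error), a log transform
# converts it back to an additive problem before the index is computed.
fa_log = np.mean(np.abs(np.log(right) - np.log(left)))

print(f"FA (unsigned): {fa_unsigned:.3f}")
print(f"FA (size-scaled): {fa_size_scaled:.4f}")
print(f"FA (log scale): {fa_log:.4f}")
```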


Author(s):  
Paula Corabian ◽  
Bing Guo ◽  
Carmen Moga ◽  
N. Ann Scott

Abstract Objectives This article retrospectively examines the evolution of rapid assessments (RAs) produced by the Health Technology Assessment (HTA) Program at the Institute of Health Economics over its 25-year relationship with a single requester, the Alberta Health Ministry (AHM). Methods The number, types, and methodological attributes of RAs produced over the past 25 years were reviewed. The reasons for developmental changes in RA processes and products over time were charted to document the push–pull tension between AHM needs and the HTA Program's drive to meet those needs while responding to changing methodological benchmarks. Results The review demonstrated the dynamic relationship required for HTA researchers to meet requester needs while adhering to good HTA practice. The longstanding symbiotic relationship between the HTA Program and the AHM initially led to increased diversity in RA types, followed by controlled extinction of the less fit (useful) “transition species.” Adaptations in RA methodology were mainly driven by changes in best practice standards, requester needs, the healthcare environment, and staff expertise and technology. Conclusions RAs are a useful component of HTA programs. To remain relevant and useful, RAs need to evolve according to need within the constraints of HTA best practice.


2020 ◽  
Vol 18 (1) ◽  
Author(s):  
Samantha Chakraborty ◽  
Bianca Brijnath ◽  
Jacinta Dermentzis ◽  
Danielle Mazza

Abstract Background There is no standardised protocol for developing clinically relevant guideline questions. We aimed to create such a protocol and to apply it to developing a new guideline. Methods We reviewed international guideline manuals and, through consensus, combined steps for developing clinical questions to produce a best-practice protocol that incorporated qualitative research. The protocol was applied to develop clinical questions for a guideline for general practitioners. Results A best-practice protocol incorporating qualitative research was created. Using the protocol, we developed 10 clinical questions that spanned diagnosis, management and follow-up. Conclusions Guideline developers can apply this protocol to develop clinically relevant guideline questions.


Author(s):  
Shyam Prabhakaran ◽  
Renee M Sednew ◽  
Kathleen O’Neill

Background: Significant opportunities remain to reduce door-to-needle (DTN) times for stroke despite regional and national efforts. In Chicago, Quality Enhancement for the Speedy Thrombolysis for Stroke (QUESTS) was a one-year learning collaborative (LC) that aimed to reduce DTN times at 15 Chicago Primary Stroke Centers. Identification of barriers and sharing of best practices resulted in achieving DTN < 60 minutes within the first quarter of the 2013 initiative, and progress has been sustained to date. Aligned with Target: Stroke goals, QUESTS 2.0, funded for the 2016 calendar year, invited 9 additional metropolitan Chicago area hospitals to collaborate and further reduce DTN times, with a goal of < 45 minutes in 50% of eligible patients. Methods: All 24 hospitals participate in the Get With The Guidelines (GWTG) Stroke registry and benchmark group to track DTN performance improvement in 2016. Hospitals implement the American Heart Association's Target: Stroke program and share best practices uniquely implemented at their sites to reduce DTN times. The LC included a quality and performance improvement leader, a stroke content expert, site visits, quarterly meetings and learning sessions, and reporting of experiences and data. Results: In 2015, the year prior to QUESTS 2.0, the proportion of patients treated with tPA within 45 minutes of hospital arrival increased from 21.6% in Q1 to 31.4% in Q2. During the 2016 funded year, this proportion increased from 31.6% in Q1 to 48.3% in Q2. Conclusions: Using a learning collaborative model to implement strategies to reduce DTN times among 24 Chicago area hospitals continues to improve those times. Regional collaboration, data sharing, and best practice sharing should be a model for rapid and sustainable system-wide quality improvement.


2021 ◽  
Author(s):  
Abigail S. L. Lewis ◽  
Whitney M. Woelmer ◽  
Heather L. Wander ◽  
Dexter W. Howard ◽  
John W. Smith ◽  
...  

Near-term iterative forecasting is a powerful tool for ecological decision support and has the potential to transform our understanding of ecological predictability. However, to this point, there has been no cross-ecosystem analysis of near-term ecological forecasts, making it difficult to synthesize diverse research efforts and prioritize future developments for this emerging field. In this study, we analyzed 178 near-term ecological forecasting papers to understand the development and current state of near-term ecological forecasting literature and compare forecast skill across ecosystems and variables. Our results indicate that near-term ecological forecasting is widespread and growing: forecasts have been produced for sites on all seven continents and the rate of forecast publication is increasing over time. As forecast production has accelerated, a number of best practices have been proposed and application of these best practices is increasing. In particular, data publication, forecast archiving, and workflow automation have all increased significantly over time. However, adoption of proposed best practices remains low overall: for example, despite the fact that uncertainty is often cited as an essential component of an ecological forecast, only 45% of papers included uncertainty in their forecast outputs. As the use of these proposed best practices increases, near-term ecological forecasting has the potential to make significant contributions to our understanding of predictability across scales and variables. In this study, we found that forecast skill decreased in predictable patterns over 1–7 day forecast horizons. Variables that were closely related (i.e., chlorophyll and phytoplankton) displayed very similar trends in predictability, while more distantly related variables (i.e., pollen and evapotranspiration) exhibited significantly different patterns. Increasing use of proposed best practices in ecological forecasting will allow us to examine the forecastability of additional variables and timescales in the future, providing a robust analysis of the fundamental predictability of ecological variables.
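
One common way to express the horizon-dependent skill described above is to compare forecast error at each lead time against a naive persistence baseline. The sketch below does this on synthetic data; the series, the error model, and the skill definition are assumptions made for illustration and are not the metrics used in the study.

```python
# Sketch: forecast skill by lead time relative to a persistence baseline,
# illustrating how predictability can decay over a 1-7 day horizon.
# Synthetic observations and forecasts; purely illustrative.
import numpy as np

rng = np.random.default_rng(2)
days, max_horizon = 365, 7
obs = np.sin(np.arange(days + max_horizon) / 10.0) + rng.normal(0, 0.2, days + max_horizon)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

for h in range(1, max_horizon + 1):
    target = obs[h:h + days]                          # what actually happened h days later
    forecast = target + rng.normal(0, 0.1 * h, days)  # forecast error grows with lead time
    persistence = obs[:days]                          # "h days from now equals today"
    skill = 1.0 - rmse(forecast, target) / rmse(persistence, target)
    print(f"horizon {h} d: skill vs persistence = {skill:.2f}")
```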


2013 ◽  
Vol 8 (4) ◽  
pp. 110 ◽  
Author(s):  
Jackie Druery ◽  
Nancy McCormack ◽  
Sharon Murphy

Objective – The term “best practice” appears often in library and information science literature, yet, despite the frequency with which the term is used, there is little discussion about what is meant by the term and how one can reliably identify a best practice. Methods – This paper reviews 113 articles that identify and discuss best practices, in order to determine how “best practices” are distinguished from other practices, and whether these determinations are made on the basis of consistent and reliable evidence. The review also takes into account definitions of the term to discover if a common definition is used amongst authors. Results – The “evidence” upon which papers on “best practices” are based falls into one of the following six categories: 1) opinion (n=18, 15%), 2) literature reviews (n=13, 12%), 3) practices in the library in which the author works (n=19, 17%), 4) formal and informal qualitative and quantitative approaches (n=16, 14%), 5) a combination of the aforementioned (i.e., combined approaches) (n=34, 30%), and 6) “other” sources or approaches which are largely one of a kind (n=13, 12%). There is no widely shared or common definition of “best practices” amongst the authors of these papers, and most papers (n=94, 83%) fail to define the term at all. The number of papers was, for the most part, split evenly amongst the six categories, indicating that writers on the subject are basing “best practices” assertions on a wide variety of sources and evidence. Conclusions – Library and information science literature on “best practices” is rarely based on rigorous empirical methods of research and therefore is generally unreliable. There is, in addition, no widely held understanding of what is meant by the use of the term.

