Modernization of the Equipment of a Head Compressor Station

Author(s):  
Stefan Janßen ◽  
Peter Pätzold ◽  
Axel Emde ◽  
Rainer Kurz

The Waidhaus compressor station in Germany is a key compressor station for the supply of natural gas from Russia into the German and European markets. The required flexibility, together with the goal of a small environmental footprint, has led to some unique but widely applicable solutions. Since this head compressor station of the MEGAL pipeline system is of the highest importance for the European gas supply, the requirements for equipment availability and reliability are significant. The paper consists of two parts: the first part describes the challenges of installing new equipment in an existing compressor station, details the characteristics of the installed equipment, and, in particular, discusses the steps taken to assure the required high availability, high reliability, and high flexibility. The second part describes the acceptance test, taking into account site-specific limitations, and shows how a highly accurate site test can be executed. The tested unit was accepted based on the acceptance test described, and the methodology is useful for the conduct and execution of site performance tests in general. The paper thus provides insight into the decision-making, installation, and acceptance process for the specific situation of a strategically important brownfield compressor station.

Author(s):  
John R. Devaney

Occasionally in history, an event occurs which has a profound influence on a technology. Such an event occurred when the scanning electron microscope became commercially available to industry in the mid-1960s. Semiconductors were being increasingly used in high-reliability space and military applications, both because of their small volume and because of their inherent reliability. However, they did fail, both early in life and sometimes in middle or old age. Why they failed, and how to prevent failure or prolong "useful life," was a worry that resulted in a blossoming of sophisticated failure analysis laboratories across the country. By 1966, the ability to build small-structure integrated circuits was forging well ahead of the techniques available to dissect and analyze failures in those same circuits. The arrival of the scanning electron microscope gave these analysts new insight into failure mechanisms.


Author(s):  
Janet R. Meyer

The messages spoken in everyday conversation are influenced by participants’ goals. Interpersonal scholars have distinguished two types of goals thought to influence the wording of a message: instrumental goals (primary goals) and secondary goals. An instrumental goal is related to a speaker’s primary reason for designing the message. Instrumental goals would include goals such as to ask for a favor, seek information, apologize, give advice, or change the other person’s opinion. Secondary goals pertain to more general concerns. They include goals such as to manage one’s impression, avoid offending the hearer, and act consistently with one’s values. The ability to design a message that pursues an instrumental goal effectively while also addressing (or at least not conflicting with) relevant secondary goals is associated with greater communication competence. Considerable research has sought to identify the factors that explain differences in the ability to design messages that effectively address multiple goals. One such factor appears to be the extent to which a speaker can adapt the language of a message to the communication-relevant features of a specific situation or hearer. If a speaker’s primary goal is to seek a favor, relevant situation features may include the speaker’s right to ask, expected resistance, and qualities of the speaker–hearer relationship. A second behavior associated with the ability to produce multiple-goal messages is suggested by research on cognitive editing. The latter research indicates that the likelihood of producing a message that addresses relevant secondary goals will sometimes depend upon whether a speaker becomes aware, prior to speaking, that a planned message could have an unwanted outcome (e.g., the message may offend the hearer). When such outcomes are anticipated in advance, the message may be left unspoken or edited prior to speaking.
The ability to produce a message that achieves a speaker’s goals may also depend on the type of planning that precedes the design of a message. The plan-based theory of strategic communication views plans as hierarchical structures that specify goals and actions at different levels of specificity. The theory holds that a person pursuing a goal first tries to retrieve from memory a preexisting plan that could be modified for the current situation. When that is not possible, speakers must formulate a novel plan. Research employing indicants of fluency suggests that formulating a novel plan (which requires changes at a higher, more abstract level of a plan) makes heavier demands on limited capacity, evidenced, for example, by speaking more slowly, than does modifying an existing plan at a lower level of the hierarchy. Insight into how persons plan what to say has also come from research on imagined interactions, conflict management, anticipating obstacles to compliance, and verbal disagreement tasks. In an effort to better understand the design of messages in interpersonal settings, a number of scholars have proposed models of the cognitive processes and structures thought to be involved in designing, editing, and producing such messages. Action models of this sort, which generate testable hypotheses, draw from work in artificial intelligence, cognitive models of language production, and research on social cognition. Three such models are action assembly theory, the cognitive rules model, and the implicit rules model.


Author(s):  
Andrew R. Lutz ◽  
Thomas A. Bubenik

The Pipeline and Hazardous Materials Safety Administration (PHMSA) has increased its emphasis on records that are “traceable, verifiable, and complete.” Organizing records into a document structure that meets this standard can be a daunting task. Through work with operators, Det Norske Veritas (U.S.A.) Inc. (DNV) identified a methodology to efficiently search and organize material property data and records into a structure that is fit for regulatory audit. The methodology consists of four steps: (1) search and organize documentation; (2) digitally capture paper documents; (3) determine document precedence; (4) create a referenceable listing. The first step reviews all files and records and identifies those pertinent to properties verification. The search is conducted at an operator’s office(s) by a team of personnel familiar with pipeline construction and maintenance documentation. Once records have been identified, they are digitally captured (scanned), making them easy to reference. This requires a set of metadata and a unique name for each document. The metadata consists of project number, document type (maintenance form, drawing, etc.), pipeline name, and information location. Document precedence is used to identify the documents most likely to contain correct material information; it is determined with operator employees who can identify the document types that have historically been considered highly reliable. Finally, a listing tabulates material properties along with the unique document name(s) for the specific records. The listing covers pipe (by segment or joint), fittings (valves, prefabricated elbows, etc.), and other components that may affect Maximum (Allowable) Operating Pressures. Typically the listing uses linear pipeline stationing as the main reference.
Implementation of the methodology yields a listing of material properties specifically linked to a digital document database, i.e., a records system that is “traceable, verifiable, and complete.” In addition to material properties, the methodology has also been applied to risk-related information (e.g., cathodic protection, crossings, and coating information). The listing can then be used to identify information gaps and potentially prioritize them based on reliability.
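The metadata-plus-precedence scheme described above can be sketched in a few lines. The field names, document types, and precedence ranking below are illustrative assumptions, not values from the paper:

```python
from dataclasses import dataclass

# Hypothetical precedence ranking: lower number = historically more reliable
DOC_PRECEDENCE = {"mill_certificate": 1, "as_built_drawing": 2, "maintenance_form": 3}

@dataclass
class Record:
    project_number: str
    doc_type: str        # maintenance form, drawing, etc.
    pipeline_name: str
    location: str        # information location, e.g., a stationing range
    file_name: str       # unique name of the scanned document

def best_source(records):
    """Pick the record most likely to contain correct material information,
    using the precedence table; unknown types rank last."""
    return min(records, key=lambda r: DOC_PRECEDENCE.get(r.doc_type, 99))

records = [
    Record("P-100", "maintenance_form", "Line A", "STA 10+00", "P100_MF_001.pdf"),
    Record("P-100", "mill_certificate", "Line A", "STA 10+00", "P100_MC_004.pdf"),
]
print(best_source(records).file_name)  # the mill certificate wins precedence
```

In a real listing, one such lookup would be performed per pipe segment or joint, keyed by stationing.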


2014 ◽  
Vol 29 (2) ◽  
pp. 125-145 ◽  
Author(s):  
Evan H. Offstein ◽  
Raymond Kniphuisen ◽  
D. Robin Bichy ◽  
J. Stephen Childers Jr

Purpose – Recent lapses in the management of high hazard organizations, such as the Fukushima event or the Deepwater Horizon blast, add considerable urgency to better understanding the complicated and complex phenomena of leading and managing high reliability organizations (HRO). The purpose of this paper is to offer both theoretical and practical insight to further strengthen reliability in high hazard organizations. Design/methodology/approach – Phenomenological study based on over three years of research and thousands of hours of study in HROs conducted through a scholar-practitioner partnership. Findings – The findings indicate that the identification and management of competing tensions arising from misalignment within and between public policy, organizational strategy, communication, decision-making, organizational learning, and leadership is the critical factor in explaining improved reliability and safety of HROs. Research limitations/implications – The study stops short of full-blown grounded theory. Steps were taken to ensure validity; however, generalizability may be limited due to the sample. Practical implications – Provides insight into reliably operating organizations that are crucial to society, where errors would cause significant damage or loss. Originality/value – Extends high reliability research by investigating more fully the competing tensions present in these complex, societally crucial organizations.


1991 ◽  
Vol 113 (2) ◽  
pp. 121-128 ◽  
Author(s):  
R. G. Ross

Differential-expansion-induced fatigue resulting from temperature cycling is a leading cause of solder joint failures in spacecraft. Achieving high-reliability flight hardware requires that each element of the fatigue issue be addressed carefully. This includes defining the complete thermal-cycle environment to be experienced by the hardware, developing electronic packaging concepts that are consistent with the defined environments, and validating the completed designs with a thorough qualification and acceptance test program. This paper describes a useful systems approach to solder fatigue based principally on the fundamental log-strain versus log-cycles-to-failure behavior of fatigue. This fundamental behavior has been useful for integrating diverse ground-test and flight operational thermal-cycle environments into a unified electronics design approach. Each element of the approach reflects both the mechanism physics that control solder fatigue and the practical realities of the hardware build, test, delivery, and application cycle.
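The log-strain versus log-cycles-to-failure behavior referenced above is a power law (a straight line on log-log axes), and diverse environments are commonly combined through linear damage accumulation. The sketch below uses a Coffin-Manson-type relation with placeholder constants; the coefficient `C` and exponent `m` are illustrative assumptions, not values from the paper:

```python
# Coffin-Manson-type power law: N_f = C * (delta_eps) ** (-1/m)
# On log-log axes: log N_f = log C - (1/m) * log(delta_eps), a straight line.
C, m = 1.0, 0.5  # fatigue coefficient and exponent (assumed, for illustration)

def cycles_to_failure(delta_eps):
    """Cycles to failure for a given cyclic strain range."""
    return C * delta_eps ** (-1.0 / m)

def miner_damage(environments):
    """Combine ground-test and flight thermal-cycle environments via Miner's
    rule: damage = sum(n_i / N_i); failure is predicted at damage = 1.0.
    environments: list of (applied_cycles, strain_range) pairs."""
    return sum(n / cycles_to_failure(eps) for n, eps in environments)
```

For example, 5000 applied cycles at a strain range whose life is 10000 cycles consumes half the fatigue life, leaving budget for the remaining flight environment.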


Author(s):  
A. James Hoshizaki

In October 1995, NOVA Gas Transmission Ltd. (NGTL) commissioned the first mechanical drive application of Solar Turbines’ Taurus™ 70S gas turbine. The unit was installed as a part of a turbine/compressor package at a compressor station on NGTL’s natural gas pipeline system. As this first installation was a part of a development test program by Solar Turbines, field evaluation was conducted subsequent to the original commissioning and related testing. This paper presents NGTL’s experiences in commissioning, startup and operation. Field performance test results for the gas turbine are presented and focus on output power, thermal efficiency and exhaust emissions. Some of the findings and observations from the field evaluation tests performed by Solar are also discussed. In addition, a description of the facility in which the turbine/compressor package is installed is provided.


Author(s):  
Pradeep Lall ◽  
Mahendra Harsha ◽  
Jeff Suhling ◽  
Kai Goebel

Electronics in high-reliability applications may be stored for extended periods of time prior to deployment. Prior studies have shown that the elastic modulus and ultimate tensile strength of SAC lead-free alloys decrease under prolonged exposure to high temperatures [Zhang 2009]. Thermal cycle magnitudes may vary over the lifetime of the product, and long-life systems may be re-deployed several times over the use life of the product. Previously, the authors identified damage precursors for correlating damage progression with the microstructural evolution of damage in second-level interconnects [Lall 2004a-d, 2005a-b, 2006a-f, 2007a-e, 2008a-f, 2009a-d, 2010a-j]. Lead-free assemblies with Sn3Ag0.5Cu solder have been subjected to a variety of thermal aging conditions, including 60°C, 85°C, and 125°C for periods between one week and two months, and to thermal cycling from −55°C to 125°C, −40°C to 95°C, and 3°C to 100°C. The presented methodology uses leading indicators of failure, based on the microstructural evolution of damage, to identify damage accrued in electronic systems subjected to sequential stresses of thermal aging and thermal cycling. Damage equivalency relationships have been developed to map damage accrued in thermal aging to the reduction in thermo-mechanical cyclic life based on damage proxies. Accrued damage has also been mapped between the −55°C to 125°C, −40°C to 95°C, and 3°C to 100°C thermal cycles. The presented method for interrogating the accrued damage of field-deployed electronics, significantly prior to failure, may allow insight into the damage initiation and progression of the deployed system. The expected error in interrogation of system state and assessment of residual life has been quantified.
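The damage equivalency idea, mapping aging-accrued damage onto an equivalent reduction in cyclic life, can be illustrated with a simple Miner-style bookkeeping. This is a toy sketch of the concept only; the paper's actual equivalency relationships are built from microstructural damage proxies, not this formula:

```python
def residual_life(n_pristine, aging_damage, cycles_used):
    """Estimate remaining thermo-mechanical cycles after sequential stresses.
    n_pristine:   cycles-to-failure of an unaged assembly
    aging_damage: fraction of life consumed by isothermal aging
                  (a damage-proxy equivalency, assumed known)
    cycles_used:  thermal cycles already applied in service
    Treats aging and cycling damage as additive fractions of one life budget."""
    remaining_fraction = 1.0 - aging_damage - cycles_used / n_pristine
    return max(0.0, remaining_fraction * n_pristine)
```

Under this bookkeeping, an assembly with 20% of its life consumed by aging and 300 of 1000 pristine cycles already applied would have about 500 cycles remaining.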


2021 ◽  
Vol 143 (4) ◽  
Author(s):  
Yichen Li ◽  
Jing Gong ◽  
Weichao Yu ◽  
Weihe Huang ◽  
Kai Wen

Abstract At present, China has a developing natural gas market, and ensuring the security of gas supply is an issue of high concern. Gas supply reliability, the natural gas pipeline system's ability to satisfy market demand, is determined by both the supply side and the demand side and is commonly used in research as a measure of the security of gas supply. In previous studies, the demand side is usually simplified by using a load duration curve (LDC) to describe demand, which neglects the effect of demand side management. This simplification leads to inaccurate and unreasonable assessments of gas supply reliability, especially in high-demand situations. To overcome this deficiency and achieve a more reasonable result, this paper extends the previous work on the demand side by proposing a novel method of natural gas demand side management, and the effects of demand side management on gas supply reliability are analyzed. The management method includes natural gas prediction models for different types of users, a user classification rule, and a demand adjustment model based on the user classification. First, an autoregressive integrated moving average (ARIMA) model and a support vector machine (SVM) model are applied to predict natural gas demand for different types of users, such as urban gas distributors (including residential, commercial, and small industrial customers), power plants, large industrial customers, and compressed natural gas (CNG) stations. Then, the user classification rule is built based on user attributes and the impact of interruption or reduction of the supplied gas. Natural gas users are classified into four levels: (1) demand fully satisfied, (2) demand slightly reduced, (3) demand reduced, and (4) demand interrupted. The user classification rule also provides the demand reduction range for each type of user.
Moreover, an optimization model of demand adjustment is built; its objective is to maximize the amount of gas supplied to each user based on the classification rule, and its constraints, including the demand reduction ranges of the different users, are determined by the classification rule. Finally, the improved method of gas supply reliability assessment is developed and applied to a case study from the authors' previous work, derived from a realistic natural gas pipeline system operated by PetroChina, to analyze the effects of demand side management on the pipeline system's gas supply reliability.
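The classification-constrained demand adjustment can be illustrated with a greedy allocation that curtails the lowest-priority users first, within their allowed reduction ranges. This is a simplified stand-in for the paper's optimization model; the user names, levels, and reduction fractions below are assumed for illustration:

```python
def adjust_demand(users, supply):
    """Allocate available gas by curtailing low-priority users first.
    users:  list of (name, level, demand, max_reduction_fraction),
            where level 1 = demand fully satisfied (never curtailed)
            and level 4 = interruptible.
    supply: total gas available.
    Returns a dict mapping user name -> allocated gas."""
    alloc = {name: demand for name, level, demand, frac in users}
    shortfall = sum(alloc.values()) - supply
    # Curtail from level 4 (interruptible) down toward level 2
    for name, level, demand, frac in sorted(users, key=lambda u: -u[1]):
        if shortfall <= 0:
            break
        if level == 1:
            continue  # protected users keep full demand
        cut = min(demand * frac, shortfall)
        alloc[name] -= cut
        shortfall -= cut
    return alloc

users = [
    ("residential", 1, 50.0, 0.0),  # fully satisfied
    ("industrial",  3, 30.0, 0.5),  # reducible by up to 50%
    ("cng_station", 4, 20.0, 1.0),  # interruptible
]
allocation = adjust_demand(users, supply=80.0)
```

Here total demand (100) exceeds supply (80), so the interruptible CNG station absorbs the entire 20-unit shortfall and the protected residential load is untouched; an LP or MILP formulation would generalize this to the paper's full constraint set.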


2011 ◽  
Vol 291-294 ◽  
pp. 2643-2646 ◽  
Author(s):  
Xiao Ming Sheng

This paper introduces a digitally controlled hydraulic variable pump, driven by compressed air and based on mechatronic-hydraulic-pneumatic integration. By inserting a toggle force amplifier between a double-acting rodless air cylinder and the variable pump, the pump uses a small compressed-air force to produce a large output power. The displacement of the air cylinder piston is measured with a digital linear displacement sensor and compared with the input signal by a computer in order to control the pressure and flow rate output by the hydraulic cylinder. The pump offers high flexibility, high reliability, and a simple, compact structure; it is easy to control with artificial intelligence techniques and is expected to find wide industrial use.


Author(s):  
Joel Smith ◽  
Jaehee Chae ◽  
Shawn Learn ◽  
Ron Hugo ◽  
Simon Park

Demonstrating the ability to reliably detect pipeline ruptures is critical for pipeline operators as they seek to maintain the social license necessary to construct and upgrade their pipeline systems. Current leak detection systems range from very simple mass balances to highly complex models with real-time simulation and advanced statistical processing, with the goal of detecting small leaks of around 1% of the nominal flow rate. No matter how finely tuned these systems are, however, they are invariably affected by noise and uncertainties in a pipeline system, resulting in false alarms that reduce confidence in the system. This study aims to develop a leak detection system that detects leaks with high reliability by focusing on sudden-onset leaks of various sizes (ruptures), as opposed to slow leaks that develop over time. The expected outcome is that pipeline operators will not only avoid the costs associated with false-alarm shutdowns but, more importantly, will be able to respond faster and more confidently in the event of an actual rupture. To accomplish these goals, leaks of various sizes are simulated using a real-time transient model based on the method of characteristics. A novel leak detection model is presented that fuses together several different preprocessing techniques, including convolutional neural networks. This leak detection system is expected to increase operator confidence in leak alarms when they occur and therefore decrease the time between leak detection and pipeline shutdown.
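The simple end of the spectrum described above, a mass balance with noise tolerance, can be sketched as a rolling-mean imbalance check. This is the baseline technique the paper contrasts against, not its CNN-based model; the threshold and window values are illustrative assumptions:

```python
def detect_rupture(inlet, outlet, threshold=0.05, window=5):
    """Flag a rupture when the rolling-mean mass imbalance between inlet and
    outlet flow measurements exceeds `threshold` (as a fraction of nominal
    flow). Averaging over `window` samples suppresses false alarms from
    measurement noise, at the cost of detection delay.
    Returns the sample index at which the alarm triggers, or None."""
    nominal = sum(inlet) / len(inlet)
    imbalances = [(i - o) / nominal for i, o in zip(inlet, outlet)]
    for k in range(window, len(imbalances) + 1):
        if sum(imbalances[k - window:k]) / window > threshold:
            return k - 1
    return None

# Simulated sudden-onset leak: outlet flow drops 15% at sample 5
inlet = [100.0] * 10
outlet = [100.0] * 5 + [85.0] * 5
alarm_at = detect_rupture(inlet, outlet)
```

The trade-off visible here, a larger window rejects more noise but delays the alarm, is precisely the sensitivity/false-alarm tension the paper's fused preprocessing approach is designed to ease.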

