Designs for Reliability and Failure Mode Prevention of Electrical Feedthroughs in Integrated Downhole Logging Tools

2021 ◽  
Vol 18 (4) ◽  
pp. 161-167
Author(s):  
Hua Xia ◽  
Nelson Settles ◽  
Michael Grimm ◽  
Gaery Rutherford ◽  
David DeWire

Abstract To enable an electrical feedthrough integrated downhole logging tool to maintain high reliability during its logging service in any hostile wellbore, it is critical to apply some guidelines for the electrical feedthrough designs. This paper introduces a safety factor-based design guideline to ensure an integrated electrical feedthrough has sufficient compression or thermomechanical stress amplitude in the stress well against potential logging failures. It is preferred to have a safety factor of 1.5–2.0 for an electrical feedthrough at the lowest temperature, such as −60°C, and a safety factor of 2.5–5.0 at the operating temperature range of 200–260°C. Moreover, the designed ambient pressure capability should be 1.5–2.0 times higher than the maximum downhole pressure, such as 25,000–30,000 PSI. To validate this thermomechanical stress model, several electrical feedthrough prototypes have been tested under simulated 200–260°C and 31,000–34,000 PSI downhole conditions. The observed testing data have demonstrated that there is a maximum allowable operating pressure for an electrical feedthrough operating at a specific downhole temperature. It is clearly demonstrated that an electrical feedthrough may operate up to 60,000 PSI at ambient temperature in a real-life application, but it may actually operate only up to 30,000–35,000 PSI at 200–260°C downhole temperatures.
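
As a rough back-of-the-envelope illustration of the pressure-margin guideline above, the Python sketch below checks a candidate feedthrough rating against the recommended 1.5–2.0× band; the function names and example ratings are illustrative assumptions, not values or code from the paper.

```python
# Minimal sketch of the pressure-margin guideline; the ratings and helper
# names below are illustrative assumptions, not values from the paper.

def required_rating(max_downhole_pressure_psi: float, factor: float = 1.75) -> float:
    """Design pressure capability should be 1.5-2.0x the maximum downhole pressure."""
    return factor * max_downhole_pressure_psi

def meets_guideline(rated_pressure_psi: float, max_downhole_pressure_psi: float,
                    minimum_factor: float = 1.5) -> bool:
    """True if the rated capability is at least 1.5x the maximum downhole pressure."""
    return rated_pressure_psi / max_downhole_pressure_psi >= minimum_factor

if __name__ == "__main__":
    max_downhole = 30_000.0                          # PSI, upper end of the quoted range
    print(required_rating(max_downhole))             # mid-band target, 52,500 PSI
    print(meets_guideline(60_000.0, max_downhole))   # ambient rating cited above -> True
    print(meets_guideline(35_000.0, max_downhole))   # hot (200-260 C) rating -> False
```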

2021 ◽  
Vol 2021 (HiTEC) ◽  
pp. 000100-000104
Author(s):  
Hua Xia ◽  
Nelson Settles ◽  
Michael Grimm ◽  
Gaery Rutherford ◽  
David DeWire

Abstract To enable an electrical feedthrough integrated downhole logging tool to maintain high reliability during its logging service in any hostile wellbore, it is critical to apply some guidelines for the electrical feedthrough designs. This paper introduces a safety factor-based design guideline to ensure an integrated electrical feedthrough has sufficient compression or thermo-mechanical stress amplitude in the stress well against potential logging failures. It is preferred to have a safety factor of 1.5–2.0 for an electrical feedthrough at the lowest temperature, such as −60°C, and a safety factor of 2.5–5.0 at the operating temperature range of 200–260°C. Moreover, the designed ambient pressure capability should be 1.5–2.0 times higher than the maximum downhole pressure, such as 25,000–30,000 PSI. To validate this thermo-mechanical stress model, several electrical feedthrough prototypes have been tested under simulated 200–260°C and 31,000–34,000 PSI downhole conditions. The observed testing data have demonstrated that there is a maximum allowable operating pressure for an electrical feedthrough operating at a specific downhole temperature. It is clearly demonstrated that an electrical feedthrough may operate up to 60,000 PSI at ambient temperature in a real-life application, but it may actually operate only up to 30,000–35,000 PSI at 200–260°C downhole temperatures.


Author(s):  
Anna Bushinskaya ◽  
Sviatoslav Timashev

Correct assessment of the remaining life of distributed systems such as pipeline systems (PS) with defects plays a crucial role in solving the problem of their integrity. The authors propose a methodology which allows estimating the random residual time (remaining life) of transition of a PS from its current state to a critical or limit state, based on available information on the sizes of the set of growing defects found during an in-line inspection (ILI), followed by verification or direct assessment. A PS with many actively growing defects is a physical distributed system, which transits from one physical state to another. This transition finally leads to failure of its components, each component being a defect. Such a process can be described by a Markov process. The degradation of the PS (measured as monotonic deterioration of its failure pressure Pf(t)) is considered as a non-homogeneous pure death Markov process (NPDMP) of the continuous-time and discrete-state type. Failure pressure is calculated using one of the internationally recognized pipeline design codes: B31G, B31Gmod, DNV, Battelle and Shell-92. The NPDMP is described by a system of non-homogeneous differential equations, which allows calculating the probability of a defect's failure pressure being in each of its possible states. On the basis of these probabilities, the gamma-percent residual life of defects is calculated. In other words, the moment of time tγ is calculated, which is a random variable, when the failure pressure of a pipeline defect Pf(tγ) > Pop with probability γ, where Pop is the operating pressure. The developed methodology was successfully applied to a real-life case, which is presented and discussed.
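
The abstract does not reproduce the governing equations, but a non-homogeneous pure death process obeys the Kolmogorov forward equations dP_i/dt = λ_{i+1}(t)·P_{i+1}(t) − λ_i(t)·P_i(t). The sketch below integrates such a system numerically and reads off a gamma-percent residual life; the number of states, the rate function, and the gamma level are all illustrative assumptions, not the authors' calibrated model.

```python
# Minimal sketch of a non-homogeneous pure-death Markov model for the
# degradation of failure pressure, in the spirit of the abstract above.
# State indexing, rates and the gamma level are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

N_STATES = 6                      # state N_STATES-1 = intact, state 0 = limit state (absorbing)

def death_rate(i: int, t: float) -> float:
    """Time-dependent transition rate out of state i (non-homogeneous)."""
    if i == 0:                    # absorbing limit state
        return 0.0
    return 0.15 * (1.0 + 0.05 * t)   # illustrative growth of corrosion activity with time

def kolmogorov_forward(t, p):
    dp = np.zeros_like(p)
    for i in range(N_STATES):
        dp[i] -= death_rate(i, t) * p[i]
        if i + 1 < N_STATES:
            dp[i] += death_rate(i + 1, t) * p[i + 1]
    return dp

p0 = np.zeros(N_STATES)
p0[-1] = 1.0                                   # start in the intact state
sol = solve_ivp(kolmogorov_forward, (0.0, 30.0), p0, dense_output=True)

# Gamma-percent residual life: first time the probability that the defect's
# failure pressure is still above the operating pressure drops below gamma.
gamma = 0.9
SAFE_STATES = list(range(2, N_STATES))         # states assumed to satisfy Pf > Pop
times = np.linspace(0.0, 30.0, 3001)
p_safe = sol.sol(times)[SAFE_STATES, :].sum(axis=0)
t_gamma = times[np.argmax(p_safe < gamma)] if (p_safe < gamma).any() else None
print("gamma-percent residual life (time units, illustrative):", t_gamma)
```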


2019 ◽  
Vol 6 (2) ◽  
pp. 218-224 ◽  
Author(s):  
Daniel McNeish ◽  
Denis G. Dumas

Changes to educational policies have expanded testing data to include multiple-administration assessments that repeatedly measure student performance over time. Psychometric models—extended for this type of data—estimate quantities typically associated with assessments that are given once, such as ability at a specific time point. This article considers how multiple-administration assessment offers the opportunity for models to estimate novel quantities that are not available from traditional single-administration assessments but may be of interest to educational researchers and stakeholders. Specifically, dynamic measurement models can directly estimate capacity—the expected future score once the construct of interest has fully developed. Preliminary evidence for this approach shows that it may be less susceptible to effects of socioeconomic status and may improve predictions of future performance. An example with real-life operational assessment data is provided. Extensions and limitations for educational assessment are also discussed.
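
Dynamic measurement treats capacity as the asymptote of a growth curve fit to repeated scores. The single-student sketch below fits an exponential approach-to-asymptote curve with scipy and reports the asymptote; it is only a toy illustration with assumed data, not the authors' full model, which is fit to many students simultaneously.

```python
# Not the dynamic measurement model itself; a one-student sketch of the core
# idea: the capacity estimate is the asymptote of a fitted growth curve.
import numpy as np
from scipy.optimize import curve_fit

def growth(t, capacity, start, rate):
    """Exponential approach to an asymptote: score rises from `start` toward `capacity`."""
    return capacity - (capacity - start) * np.exp(-rate * t)

# Hypothetical repeated scores for one student across five administrations.
t_obs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
scores = np.array([48.0, 57.0, 63.0, 66.0, 68.0])

params, _ = curve_fit(growth, t_obs, scores, p0=[75.0, 45.0, 0.5])
capacity, start, rate = params
print(f"estimated capacity (asymptotic score): {capacity:.1f}")
```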


Author(s):  
J. L. Parham ◽  
Y. B. Guo ◽  
W. H. Sutton

With fuel prices reaching record highs and ever-tighter environmental policies, hydrogen-powered vehicles have great potential to substantially increase overall fuel economy, reduce vehicle emissions, and decrease dependence on foreign oil imports. While hydrogen fuel is exciting for automotive industries due to its potential for significant technical and economic advantages, designing and manufacturing safe and reliable hydrogen tanks is recognized as the number one priority in hydrogen technology development and deployment. Real-life testing of tank performance is extremely useful, but very time consuming, expensive, and lacking a rigorous scientific basis, which prohibits the development of a more reliable hydrogen tank. However, very few testing and simulation results can be found in the public literature. This paper focuses on the development of an efficient finite element analysis (FEA) tool to provide a more economical alternative for hydrogen tank analysis, though it may not be an all-out replacement for physical testing. An FEA model has been developed for a hydrogen tank with a 6061-T6 aluminum liner and carbon-fiber/epoxy shell to investigate the tank integrity at pre-stresses of 45.5 MPa, 70 MPa, and 105 MPa and operating pressures of 35 MPa, 70 MPa, and 105 MPa. The residual stresses induced by different pre-stresses are at an equivalent level in the middle section but vary significantly in other tank sections. Residual stress magnitudes may saturate at a certain pre-stress level. In contrast, the residual strains in the middle section increase with pre-stress. The simulation results indicate that the optimal pre-stress level depends on the specific operating pressure to enhance tank integrity. Certain areas of the neck and the top and bottom domes also experience peak stress and strain at pre-stressing and regular operating pressures. The research findings may help manufacturing industries to build safety into manufacturing practices of hydrogen storage infrastructures.
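
The integrity assessment in the paper is finite-element based; the closed-form sketch below is only a thin-walled-cylinder hoop-stress estimate with assumed liner geometry, included to give a feel for the load levels at the quoted operating pressures.

```python
# Back-of-the-envelope thin-wall check with assumed geometry; not a substitute
# for the paper's FEA of the liner/composite tank.

def hoop_stress_mpa(pressure_mpa: float, inner_radius_mm: float, wall_mm: float) -> float:
    """Thin-walled cylinder hoop stress: sigma = P * r / t."""
    return pressure_mpa * inner_radius_mm / wall_mm

if __name__ == "__main__":
    # Assumed (hypothetical) liner geometry for illustration only.
    r_mm, t_mm = 175.0, 6.0
    for p in (35.0, 70.0, 105.0):                       # operating pressures from the abstract
        print(f"P = {p:5.1f} MPa -> hoop stress ~ {hoop_stress_mpa(p, r_mm, t_mm):7.1f} MPa")
```

The resulting liner-only stresses far exceed what an aluminum liner could carry by itself, which is exactly why the carbon-fiber overwrap carries most of the load, why the pre-stress (which puts the liner into compression) matters, and why a full FEA of the layered structure is needed.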


Author(s):  
Chetna Choudhary ◽  
P. K. Kapur ◽  
A. K. Shrivastava ◽  
Sunil K. Khatri

Demand for highly reliable software is increasing day by day, which in turn has increased the pressure on software firms to provide reliable software in very little time. Ensuring high reliability of the software can only be done through prolonged testing, which in turn consumes more resources and is not feasible in the existing market situation. To overcome this, software firms provide patches after software release to fix the remaining bugs and to give users a better product experience. An update/fix is a small piece of software that repairs bugs. With such patches, organizations enhance the performance of the software. Delivering patches after release demands extra effort and resources, which are costly and hence not economical for the firms. Also, an early patch release might cause improper fixing of bugs; on the other hand, a delayed release may increase the chances of more failures during the operational phase. Therefore, determining the optimal patch release time is imperative. To overcome the above issues, we have formulated a two-dimensional, time- and effort-based cost model to determine the optimal release and patch times of software so that the total cost is minimized. The proposed model is validated on a real-life data set.
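
The cost components below are generic stand-ins (a Goel-Okumoto style mean-value function and linear cost coefficients), not the authors' two-dimensional time-and-effort model; the sketch only shows the structure of jointly optimizing a release time and a patch time against a total-cost surface.

```python
# Illustrative joint optimization of release time and patch time; all rates and
# cost coefficients are assumptions, not parameters from the paper.
import numpy as np
from scipy.optimize import minimize

def expected_faults(t, a=100.0, b=0.3):
    """Goel-Okumoto style mean-value function for cumulative detected faults."""
    return a * (1.0 - np.exp(-b * t))

def total_cost(x, c_test=8.0, c_fix_pre=1.0, c_support=2.0,
               c_fix_patch=3.0, c_fix_field=10.0, horizon=30.0):
    t_release, t_patch = x
    pre = c_test * t_release + c_fix_pre * expected_faults(t_release)
    support = c_support * (t_patch - t_release)          # keeping the patch team engaged
    patched = c_fix_patch * (expected_faults(t_patch) - expected_faults(t_release))
    field = c_fix_field * (expected_faults(horizon) - expected_faults(t_patch))
    return pre + support + patched + field

res = minimize(total_cost, x0=[5.0, 10.0],
               bounds=[(1.0, 25.0), (1.0, 29.0)],
               constraints=[{"type": "ineq", "fun": lambda x: x[1] - x[0]}])
print("optimal release time:", round(res.x[0], 2),
      "optimal patch time:", round(res.x[1], 2),
      "minimum total cost:", round(res.fun, 2))
```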


2001 ◽  
Author(s):  
Ilyas Mohammed ◽  
Young-Gon Kim

Abstract It is well known that the main cause of mechanical failure in electronic packages is the difference in the coefficients of thermal expansion (CTE) of the silicon and the organic board. There are many packaging technologies that try to overcome this limitation, ranging from making curved connection pins (gull-wing leads) from the package to the board, as in the case of the Thin Small Outline Package (TSOP), to using hard epoxy to rigidly adhere the die to the board, as in the case of flip-chip packages. This paper illustrates a compliant packaging concept that minimizes the effect of the CTE mismatch between the silicon die and the board. A summary of different packaging techniques that address the CTE mismatch problem is presented. From this summary, it is apparent that many of these techniques do not provide as high reliability as compliant packages do, especially when the electrical connections from the package to the board (solder balls) are present directly under the silicon die, as in the case of chip-scale packages. Because the compliant package isolates the effect of the silicon die from the substrate, the silicon has some motion relative to the substrate. This means that the interconnections from the silicon to the substrate must be designed to withstand this motion; hence the design of these interconnections is key to maximizing the reliability of compliant packages. A detailed design and reliability analysis of compliant packages for different applications is presented. The design highlights the main parameters that affect the reliability of the package. Reliability simulation and analysis using finite element techniques is presented for different designs to highlight the key parameters that govern the reliability of compliant packages. Finally, reliability testing data are presented for different packages.
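
A standard first-order way to see why under-die solder connections suffer from CTE mismatch is the distance-to-neutral-point estimate of joint shear strain; the sketch below uses typical assumed material values and is unrelated to the paper's finite element models.

```python
# First-order CTE-mismatch shear strain on a solder joint (distance-to-neutral-
# point approximation); all numerical values are typical assumed inputs.

def joint_shear_strain(dnp_mm: float, cte_board_ppm: float, cte_si_ppm: float,
                       delta_t_c: float, joint_height_mm: float) -> float:
    """gamma ~ DNP * delta_alpha * delta_T / h  (dimensionless shear strain)."""
    delta_alpha = (cte_board_ppm - cte_si_ppm) * 1e-6   # 1/degC
    return dnp_mm * delta_alpha * delta_t_c / joint_height_mm

if __name__ == "__main__":
    # Typical (assumed) values: FR-4 board ~17 ppm/C, silicon ~2.6 ppm/C,
    # 100 C thermal swing, 0.3 mm joint height, 5 mm from the package center.
    gamma = joint_shear_strain(5.0, 17.0, 2.6, 100.0, 0.3)
    print(f"estimated joint shear strain: {gamma:.3%}")
```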


2007 ◽  
Vol 50 (2) ◽  
pp. 98-117 ◽  
Author(s):  
Milena Krasich

Traditionally, a reliability growth test was performed at the levels of various operational and environmental stresses, often at a level equal to that expected in use. Other than formal failure modes and effects analysis, reliability growth tests were often the only practical means for identification and mitigation of failure modes of a newly designed product. With the present high reliability requirements and long product useful life, the length of reliability growth tests may become cost and schedule prohibitive; therefore, accelerated testing is taking the place of prior testing at the use levels. This practice, however, does not address the dilemma of possibly unrealistically skewed test results, which depend heavily on the sequence of individually applied stresses because, unfortunately, it is often difficult or impossible to apply all of the environmental and operational stresses simultaneously. An example of this problem would be a case where the majority of failure modes in a product result from or are related to a specific stress, and this test was performed early in the program. These early failures would then produce a high growth rate and an incorrect estimate of the product's achieved reliability if the analysis were done by standard analytical models. The test data may also be skewed in the opposite way, producing little or no reliability growth. This concern has been addressed as a serious caution in Edition 2 of International Electrotechnical Commission (IEC) standard 61014, Programmes for reliability growth. This study shows how data analysis applied to an accelerated life test based on reliability growth methodology may produce a viable solution to the calculation problems. The stresses applied in this test are an accelerated application of most of the stresses expected to take place during product use. Each of the tests represents a lifetime exposure to an individual stress. If those stresses are applied individually and in sequence, they are considered to be equivalent to being applied in parallel with one another, as the duration of each stress is calculated to represent the life of the product. Time to failure in each test is re-calculated to represent time to failure in real life. Failure occurrences are then sorted in increasing order and analyzed using one of the reliability growth test analytical methods.
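
For the final step, one widely used reliability growth analytical method is the Crow-AMSAA (NHPP power-law) model; the sketch below applies its standard failure-truncated maximum-likelihood estimators to a made-up set of "real-life equivalent" failure times and is not taken from the study.

```python
# Crow-AMSAA (NHPP power-law) fit to sorted, de-accelerated failure times.
# The failure times are illustrative placeholders, not data from the paper.
import math

failure_times = sorted([120.0, 410.0, 980.0, 1600.0, 2500.0, 4100.0, 6300.0])  # hours
n = len(failure_times)
T = failure_times[-1]                       # failure-truncated at the last event

beta = n / sum(math.log(T / t) for t in failure_times[:-1])   # MLE shape parameter
lam = n / T**beta                                             # MLE scale parameter
mtbf_inst = 1.0 / (lam * beta * T**(beta - 1.0))              # instantaneous MTBF at T

print(f"beta = {beta:.3f}  (beta < 1 indicates reliability growth)")
print(f"instantaneous MTBF at test end ~ {mtbf_inst:.0f} h")
```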


Author(s):  
J Zhang ◽  
F Gao ◽  
H Yu ◽  
X Zhao

This paper proposes a novel dual-input/single-output actuator unit, called a redundant and fault-tolerant actuator (RFTA), designed for heavy-load parallel manipulators. After the definitions of redundancy and fault tolerance are given, the principle of the proposed RFTA is described through its two working processes. According to the derived kinematics, 12 fault modes caused by two different input velocities are developed and classified, and the corresponding physical meanings are described. The mechanical transmission properties of the proposed RFTA are analysed. As a dual-module hot-spare architecture intended to achieve high reliability and safety, the RFTA, with its completely redundant structure, can be called a Multiple Births Structure (MBS), which differs from the traditional partially redundant structure, the Siamesed Births Structure (SBS). The reliability models show that the RFTA is more reliable than the SBS. Three guidelines are suggested: a selection guideline, a design guideline, and an operation guideline; these guidelines are useful to designers and users. A prototype of the RFTA and its control system are developed and the relevant experiments are carried out. The experimental results demonstrate that the RFTA is able not only to supply double the driving force but also to tolerate some local faults caused by out-of-sync conditions. The RFTA provides heavy-duty equipment, especially large-scale parallel manipulators, with a new possibility of changing their drive mode from hydraulic power to motor drive. Furthermore, this paper also demonstrates that the RFTA has potential applications in heavy-duty environments, such as a large-scale parallel earthquake simulator and an electric press with heavy duty cycles.
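
The paper's reliability models are not reproduced in the abstract, so the sketch below only illustrates the generic hot-spare argument behind the MBS/SBS comparison: under an assumed exponential-lifetime model, a fully redundant dual-module unit survives whenever at least one module survives.

```python
# Generic hot-spare reliability comparison under an assumed exponential
# lifetime; not the paper's own MBS/SBS reliability models.
import math

def r_single(t_hours: float, failure_rate_per_hour: float) -> float:
    return math.exp(-failure_rate_per_hour * t_hours)

def r_dual_hot_spare(t_hours: float, failure_rate_per_hour: float) -> float:
    r = r_single(t_hours, failure_rate_per_hour)
    return 1.0 - (1.0 - r) ** 2          # survives if at least one of the two modules survives

lam = 1e-4          # assumed module failure rate, 1 per 10,000 h
for t in (1_000.0, 5_000.0, 10_000.0):
    print(f"t = {t:7.0f} h  single: {r_single(t, lam):.4f}   dual: {r_dual_hot_spare(t, lam):.4f}")
```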


Author(s):  
S. A. Timashev ◽  
A. V. Bushinskaya

Predictive maintenance (PdM) is the leading-edge type of maintenance. Its principles are currently broadly used to maintain industrial assets [16]. Yet PdM is not yet embraced by the pipeline industry. The paper describes a comprehensive, practical, risk-based methodology of predictive maintenance of pipelines for different criteria of failure. For pipeline systems the main criterion is integrity. One of the main causes of loss of containment is pipe wall defects which grow over time. Any type of analysis of pipeline state (residual lifetime, probability of failure (POF), etc.) is based on the sizes of discovered defects, which are assessed during the ILI or DA. In the developed methodology pipeline strength is assessed using one of five internationally recognized design codes (B31G, B31Gmod, DNV, Battelle, Shell 92). The pipeline POF is calculated by the comprehensive Gram-Charlier-Edgeworth method [14]. Bearing in mind that repair actions are executed on particular cross-sections of the pipeline, the POF is calculated for each defect present in the pipeline. When calculating POFs, the defect sizes (depth, length and width), wall thickness and pipe diameter, SMYS of the pipe material, the radial and longitudinal corrosion rates, and the operating pressure (OP) are considered random variables, each distributed according to its PDF. In the proposed method of PdM of pipelines the remaining lifetime can be assessed using the following criteria: POF = Qth; dd = 80%wt; SMOP = MAOP; ERF = MAOP/SMOP (if ERF ≥ 1, the pipeline needs immediate repair); dd = 100%wt. Here Qth is the ultimate permissible POF, dd is the depth of the most dangerous defect, wt is the pipe wall thickness, SMOP is the maximal safe operating pressure, SMOP = DF·Pf, MAOP is the maximum allowable operating pressure, Pf is the failure pressure, DF is the design factor (for B31Gmod DF = 1.39), and ERF is the estimated repair factor. The above criteria are arranged in order of increasing severity over time. The prediction of future sizes of growing defects and of the pipeline remaining lifetime is obtained by using consistent assessments of their corrosion rates (CRs). In the PdM methodology these CRs may be considered as deterministic, semi-probabilistic or fully stochastic values. Formulas are given for assessing the CRs using results of one ILI, of two consecutive ILIs, with or without verification measurements, and for the case when several independent types of measurements are used to assess the defect sizes. The paper describes results of implementation of the developed methodology on a real-life pipeline. The time to reach each of the limit states given above was calculated using the results of two consecutive ILIs separated by a three-year interval. Knowledge of these arrival times permits minimizing the maintenance expenditures without creating any threats to the pipeline's integrity and safety.
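
For concreteness, the sketch below evaluates the SMOP/ERF criterion using a commonly cited form of the modified B31G (0.85 dL) failure-pressure assessment; the pipe, defect, and MAOP values are illustrative, and the derating of Pf by 1.39 follows the usual safety-factor convention rather than any formula reproduced in the paper.

```python
# Illustrative SMOP/ERF check with a commonly cited modified B31G formula.
# Pipe, defect and MAOP values are assumptions, not data from the paper.
import math

def b31g_mod_failure_pressure(d_mm, L_mm, D_mm, t_mm, smys_mpa):
    """Failure pressure (MPa) of a corrosion defect per modified B31G (0.85 dL)."""
    flow_stress = smys_mpa + 69.0                    # SMYS + 69 MPa (10 ksi)
    z = L_mm**2 / (D_mm * t_mm)
    if z <= 50.0:
        M = math.sqrt(1.0 + 0.6275 * z - 0.003375 * z**2)   # Folias bulging factor
    else:
        M = 0.032 * z + 3.3
    dt = d_mm / t_mm
    return (2.0 * t_mm * flow_stress / D_mm) * (1.0 - 0.85 * dt) / (1.0 - 0.85 * dt / M)

# Illustrative pipe, defect and operating data.
D, t, smys = 720.0, 10.0, 360.0          # mm, mm, MPa (X52-class steel)
d, L = 4.0, 120.0                        # defect depth and length, mm
maop = 5.5                               # MPa

pf = b31g_mod_failure_pressure(d, L, D, t, smys)
smop = pf / 1.39                         # derate failure pressure to a safe operating pressure
erf = maop / smop
print(f"Pf = {pf:.2f} MPa, SMOP = {smop:.2f} MPa, ERF = {erf:.2f}"
      + ("  -> immediate repair" if erf >= 1.0 else "  -> acceptable for now"))
```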


Joint Rail ◽  
2002 ◽  
Author(s):  
Patrick Ackroyd ◽  
Steven Angelo ◽  
Boris Nejikovsky ◽  
Jeffrey Stevens

Federal Track Safety Standards require daily measurements of car body and truck accelerations on trains operating at speeds above 125 MPH. In compliance with this requirement, twelve high-speed Acela coaches operating in the Northeast Corridor between Boston, MA, and Washington, DC, have been equipped with remote monitoring systems. The systems provide continuous measurement of car body and truck motions, detect various acceleration events, tag them with GPS time and location information, and deliver the data to Central Processing Stations through wireless communications channels. The Central Processing Stations installed at the National Railroad Passenger Corporation (Amtrak) and ENSCO, Inc., headquarters provide email and pager notifications to designated Amtrak officers and also make the data available to them over secure Intranet and Internet connections. The overall architecture has multiple levels of protection and redundancy in order to ensure high reliability and availability of the service. The systems have been in continuous operation for over a year and have provided a wealth of valuable information. Examples of system-reported acceleration events include events caused by track irregularities and train handling. The paper also describes some of the real-life operational scenarios and situations that arise when autonomous remote monitoring systems are used, including wireless communications coverage issues, GPS location pitfalls, and maintenance issues.
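
As a toy version of the onboard processing described above, the sketch below flags acceleration samples that exceed a threshold and tags each event with the nearest GPS fix; the threshold, data structures, and sample values are illustrative assumptions, not the deployed Amtrak/ENSCO implementation.

```python
# Toy threshold-based acceleration event detector with GPS tagging.
# Field names, threshold and data are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class Sample:
    t: float          # seconds
    accel_g: float    # car-body lateral acceleration, g

@dataclass
class GpsFix:
    t: float
    lat: float
    lon: float

def detect_events(samples: List[Sample], fixes: List[GpsFix], threshold_g: float = 0.35):
    events = []
    for s in samples:
        if abs(s.accel_g) >= threshold_g:
            nearest = min(fixes, key=lambda f: abs(f.t - s.t))   # tag with the closest GPS fix
            events.append({"t": s.t, "accel_g": s.accel_g,
                           "lat": nearest.lat, "lon": nearest.lon})
    return events

if __name__ == "__main__":
    fixes = [GpsFix(0.0, 41.30, -72.92), GpsFix(10.0, 41.31, -72.90)]
    samples = [Sample(3.2, 0.12), Sample(7.8, 0.41), Sample(9.1, -0.38)]
    for e in detect_events(samples, fixes):
        print(e)
```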

