mean time between failures
Recently Published Documents


TOTAL DOCUMENTS: 126 (FIVE YEARS: 37)

H-INDEX: 7 (FIVE YEARS: 1)

Author(s):  
Prof. Sachin N. Patil

Abstract: When minutes of downtime can negatively impact a business's bottom line, it is crucial that the supporting physical infrastructure be reliable. Equipment reliability can be achieved with a solid understanding of mean time between failures (MTBF). MTBF has been used for years as a basis for maintenance decisions, supported by various methods and procedures for lifecycle prediction. MTBF can be used to quantify the reliability of a maintainable system. Models of mean time between failures can be developed using the Poisson distribution, the Weibull model, or a Bayesian model. This paper discusses the complexities and misconceptions surrounding MTBF and clarifies, in a sequential manner, the criteria that need to be considered when estimating it. Examples are used throughout in an effort to simplify the complexity. Keywords: MTBF, Two Tandem Mill, Sugar Mill, Reliability, Maintenance
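As a rough illustration of the estimators the abstract mentions, the sketch below computes a Poisson/exponential point estimate of MTBF and the mean of a fitted Weibull distribution; the operating hours, failure count, and Weibull parameters are hypothetical, not the paper's data.

```python
import math

def mtbf_point_estimate(total_operating_hours, n_failures):
    """Poisson/exponential point estimate: MTBF = total time / failures."""
    return total_operating_hours / n_failures

def weibull_mtbf(shape_k, scale_lam):
    """Mean of a Weibull(k, lambda) life distribution: lambda * Gamma(1 + 1/k)."""
    return scale_lam * math.gamma(1.0 + 1.0 / shape_k)

# hypothetical mill data: 8760 h of operation with 12 recorded failures
mtbf = mtbf_point_estimate(8760, 12)      # 730 h between failures on average
# hypothetical Weibull parameters from a fit to the same failure times
mtbf_weibull = weibull_mtbf(1.5, 800)     # wear-out regime (k > 1)
```

A shape parameter k > 1 models wear-out (increasing failure rate), which is why the Weibull mean differs from the simple exponential estimate even on the same data.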


2022 ◽  
Vol 10 (1) ◽  
pp. 0-0

Software failure prediction is an important activity in agile software development, as it helps managers identify failure-prone modules, reducing test time and cost and allowing testing resources to be assigned efficiently. RapidMiner Studio 9.4 was used to perform all the required steps in a unified environment, from preparing the primary data to visualizing the results and evaluating, verifying, and improving the outputs. Two datasets are used in this work. For the first, across all 181 rows and all recorded test times, the prediction failure rate using mean time between failures (MTBF) was 3%; that is, the SVM achieved 97% prediction success, compared with previous work in which Administrative Delay Time (ADT) achieved a statistically significant overall success rate of 93.5%. For the second dataset, the prediction failure rate using MTBF was 1.5%, i.e., the SVM achieved 98.5% prediction success.
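A minimal sketch of the SVM step outside RapidMiner, using scikit-learn on a tiny synthetic dataset; the feature names, values, and labels are invented for illustration and are not the study's datasets.

```python
# Predict module failure from two hypothetical metrics with an SVM,
# mirroring the kind of classification step the study ran in RapidMiner.
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# synthetic records: (mean_time_between_failures_h, code_churn) -> failed?
X = [[120, 30], [110, 35], [100, 40], [90, 45],
     [600, 5], [650, 4], [700, 3], [750, 2]]
y = [1, 1, 1, 1, 0, 0, 0, 0]

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)   # fraction of held-out rows predicted correctly
```

A real study would use cross-validation and the actual defect datasets; this only shows the train/predict/score shape of the workflow.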


Author(s):  
Emmanuel Agullo ◽  
Mirco Altenbernd ◽  
Hartwig Anzt ◽  
Leonardo Bautista-Gomez ◽  
Tommaso Benacchio ◽  
...  

This work is based on the seminar titled ‘Resiliency in Numerical Algorithm Design for Extreme Scale Simulations’, held March 1–6, 2020, at Schloss Dagstuhl and attended by all the authors. Advanced supercomputing is characterized by very high computation speeds, achieved at the cost of an enormous amount of resources. A typical large-scale computation running for 48 h on a system consuming 20 MW, as predicted for exascale systems, would consume a million kWh, corresponding to about 100k Euro in energy cost for executing 10^23 floating-point operations. It is clearly unacceptable to lose the whole computation if any of the several million parallel processes fails during the execution. Moreover, if a single operation suffers from a bit-flip error, should the whole computation be declared invalid? What about the notion of reproducibility itself: should this core paradigm of science be revised and refined for results that are obtained by large-scale simulation? Naive versions of conventional resilience techniques will not scale to the exascale regime: with a main memory footprint of tens of Petabytes, synchronously writing checkpoint data all the way to background storage at frequent intervals will create intolerable overheads in runtime and energy consumption. Forecasts show that the mean time between failures could be lower than the time to recover from such a checkpoint, so that large calculations at scale might not make any progress if robust alternatives are not investigated. More advanced resilience techniques must be devised. The key may lie in exploiting both advanced system features and specific application knowledge. Research will face two essential questions: (1) what are the reliability requirements for a particular computation and (2) how do we best design the algorithms and software to meet these requirements?
While the analysis of use cases can help understand the particular reliability requirements, the construction of remedies is currently wide open. One avenue would be to refine and improve on system- or application-level checkpointing and rollback strategies in the case an error is detected. Developers might use fault notification interfaces and flexible runtime systems to respond to node failures in an application-dependent fashion. Novel numerical algorithms or more stochastic computational approaches may be required to meet accuracy requirements in the face of undetectable soft errors. These ideas constituted an essential topic of the seminar. The goal of this Dagstuhl Seminar was to bring together a diverse group of scientists with expertise in exascale computing to discuss novel ways to make applications resilient against detected and undetected faults. In particular, participants explored the role that algorithms and applications play in the holistic approach needed to tackle this challenge. This article gathers a broad range of perspectives on the role of algorithms, applications and systems in achieving resilience for extreme scale simulations. The ultimate goal is to spark novel ideas and encourage the development of concrete solutions for achieving such resilience holistically.
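One standard way to reason about the checkpoint-versus-MTBF trade-off described above is Young's first-order formula for the optimal checkpoint interval; the checkpoint cost and MTBF values below are illustrative, not figures from the seminar.

```python
import math

def young_checkpoint_interval(checkpoint_cost_s, mtbf_s):
    """Young's first-order optimum: tau = sqrt(2 * C * MTBF).

    C is the time to write one checkpoint; the interval shrinks as the
    system MTBF shrinks, so frequent failures force frequent checkpoints.
    """
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# illustrative numbers: a 10-minute checkpoint write, a 6-hour system MTBF
tau = young_checkpoint_interval(600, 6 * 3600)       # ~85 min between checkpoints
# with an MTBF of only 15 minutes, the optimum collapses toward the
# checkpoint cost itself and almost no useful progress is made
tau_degraded = young_checkpoint_interval(600, 900)
```

When the interval approaches the checkpoint cost, the machine spends most of its time writing and recovering state, which is exactly the failure mode the text warns about for naive exascale checkpointing.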


2021 ◽  
Vol 4 (2) ◽  
pp. 19-30
Author(s):  
Madson do Nascimento Araújo ◽  
Josias Guimarães Batista ◽  
André Pimentel Moreira ◽  
Danielle Alves Barbosa ◽  
Linconl Lobo Da Silva

Technology is advancing in all fields, mainly in the industrial sector, where it manifests in more modern and self-contained equipment. Much equipment in industry is obsolete in its electrical and electronic parts, while the mechanical parts remain in perfect condition and can often be used for a long time. This paper demonstrates the retrofitting of an obsolete fabric finishing machine (Sanforizadeira) in a textile process. Under these conditions, it became necessary to retrofit the automation system. Through the study of the machine and the process, involving the production and mechanical maintenance departments, the critical points and the improvements to be implemented were identified, and the materials needed to carry out the work were specified. The automation system was modernized, the control panels were replaced, and improvements were implemented. The final result met the objectives outlined in this work, guaranteeing the company a system with considerable improvements and a reduction in the number of stops and in the time spent on maintenance. The results were demonstrated by graphs of the maintenance indicators. Comparing before and after retrofitting over six months, the following indicators were analyzed: failure rate, with an average reduction of 50% in failures per hour; mean time between failures, with an average increase of 140 hours in the prediction of the next failure; mean time to repair, with an average reduction of thirteen minutes in the resolution of failures; and availability, with a 16% increase that left the indicator above 90% (the ideal being 100%). Finally, the total cost of the project represented only 9.5% of the value of a new machine, which is also considered a positive result of the work.
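The four indicators compared in the retrofit study follow directly from the failure log; a minimal sketch with invented before/after figures (not the machine's actual data):

```python
def maintenance_kpis(operating_hours, n_failures, total_repair_hours):
    """Standard maintenance indicators used to compare before/after a retrofit."""
    mtbf = operating_hours / n_failures          # mean time between failures
    mttr = total_repair_hours / n_failures       # mean time to repair
    failure_rate = 1.0 / mtbf                    # failures per operating hour
    availability = mtbf / (mtbf + mttr)          # steady-state availability
    return {"mtbf": mtbf, "mttr": mttr,
            "failure_rate": failure_rate, "availability": availability}

# illustrative figures only: halving the failure count and trimming repair
# time raises every indicator at once
before = maintenance_kpis(4000, 40, 120)   # MTBF 100 h, MTTR 3 h
after  = maintenance_kpis(4000, 20, 50)    # MTBF 200 h, MTTR 2.5 h
```

Availability ties the other three together: it improves both when failures become rarer (higher MTBF) and when each repair gets shorter (lower MTTR).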


2021 ◽  
Author(s):  
Oki Maulidani ◽  
Christian Bonilla ◽  
Monica Paredes ◽  
Pedro Escalona ◽  
Jorge Villalobos ◽  
...  

Abstract The electrical submersible pump (ESP) is the main artificial lift system in the Shushufindi field. Besides facing high gas production and high scale and corrosion tendencies, these systems also have to deal with surface fluid-handling and electrical-power limitations, which combined impose challenges in optimizing the ESP system. In this context, the digitalization initiative has been key to integrating data in order to obtain a big picture of the actual field condition and, ultimately, to enhance oil production. Various dashboards have been created using a business intelligence tool to provide real-time information. The ESP dashboard shows opportunities to optimize the ESP unit by integrating real-time and manually entered data: optimizing frequency and surface equipment, identifying opportunities for pump upsizing, and re-designing the ESP downhole equipment. This analysis is derived from ESP simulation, nodal analysis, chemical treatment monitoring, and real-time surveillance of the ESP parameters. Dashboards for water handling, electrical power, and chemical treatment support process analysis by providing current field status, together with feedback from operational and engineering recommendations. Comprehensive real-time monitoring resulted in an average of 500 bopd less production deferment over the last 12 months as a result of early detection and proper operational optimization (chemical treatment, gas flaring, and choke optimization) of unstable wells. Strategic decisions have been executed to ensure the availability of water-handling capacity and electrical power at each production station, such as stimulating disposal wells, cleaning injection flowlines, and repairing power generators. Up to 3,000 bopd of total incremental production has been generated in the last 12 months as a result of 17 upsizing operations, frequency optimization in 68 wells, and surface equipment optimization in 35 wells.
The associated mean time between failures (MTBF) of the ESP system has increased over time, from 224 days in 2013 to 674 days in 2020. Digitalization is a game changer for optimizing oilfield production and reducing associated operational risks through features such as real-time surveillance, edge computing, remote actuation, and big-data intelligence. This paper elaborates in detail on how digitalization can be valuable in optimizing the ESP system, with a successful case study in the Shushufindi field.
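Run-life figures like those quoted above are typically computed as cumulative run-days of the operating ESP population divided by the failures in the period; the population size and failure counts below are assumptions chosen only to produce numbers of the same order as the reported 224 and 674 days.

```python
def esp_mtbf_days(total_run_days, failures_in_period):
    """Common field definition: cumulative ESP run-days / failures in the period."""
    return total_run_days / failures_in_period

# hypothetical population: 100 pumps running a full year
mtbf_early = esp_mtbf_days(100 * 365, 163)   # many pullouts -> short run life
mtbf_late  = esp_mtbf_days(100 * 365, 54)    # fewer failures -> ~3x run life
```

The same formula explains why halving the failure count of an operating population doubles its MTBF even when nothing else changes.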


2021 ◽  
Vol 249 ◽  
pp. 408-416
Author(s):  
Ivan Bogdanov ◽  
Boris Abramovich

In accordance with the Energy Strategy until 2035, the possibility of increasing the efficiency of using secondary energy resources, in the form of associated petroleum and waste gases, has been substantiated: the energy efficiency of the primary energy carrier can be raised to 90-95% by means of cogeneration plants with a binary electricity-generation cycle and trigeneration systems that use waste-gas energy to cool the air flow at the inlet of gas turbine plants. The conditions for maintaining the rated power of the main generator under variations in ambient temperature are shown. An effective topology of electrical complexes in a multi-connected power supply system of oil and gas enterprises, selected according to the reliability condition, is presented; it increases the availability factor by 0.6%, the mean time between failures by 33%, and the probability of failure-free operation by 15%, and reduces the mean system recovery time by 40%. The article considers the use of parallel active filters in autonomous electrical complexes of oil and gas enterprises to improve power quality and reduce voltage drops to 0.1 s. The possibility of providing an uninterrupted power supply using thyristor-based automatic transfer switching has been proven. A comparative analysis was carried out to assess the effect of parallel active filters and thyristor automatic transfer switching on the main reliability indicators of power supply systems of oil and gas enterprises.
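The reported reliability gains compound in the availability factor; a sketch applying the stated relative improvements (+33% MTBF, -40% recovery time) to an assumed baseline yields a gain of the same order as the 0.6% quoted. The baseline MTBF and recovery time are invented for illustration.

```python
def availability(mtbf_h, mean_recovery_h):
    """Steady-state availability factor of a repairable power-supply element."""
    return mtbf_h / (mtbf_h + mean_recovery_h)

# assumed baseline: 2000 h between failures, 24 h mean recovery
base = availability(2000, 24)
# apply the paper's relative improvements: MTBF +33 %, recovery time -40 %
improved = availability(2000 * 1.33, 24 * 0.60)
gain_pct_points = (improved - base) * 100   # on the order of 0.6 points
```

Because availability is already close to 1 for such systems, even large relative MTBF gains translate into fractions of a percentage point, which is why sub-1% availability improvements are still meaningful.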


2021 ◽  
Vol 15 (3) ◽  
pp. 16-21
Author(s):  
Vlad Alexandru Florea ◽  
Dragos Pasculescu ◽  
Vlad Mihai Pasculescu

Purpose. The aim of the study is to determine and analyse the causes of faults in the operation of the TR-7A scraper conveyor, to estimate the time required for their remediation, and to select methods for their prevention and elimination. Methods. The ability of a system such as the scraper conveyor to fulfil its specified function over time and under given operating conditions can be studied theoretically by determining its operational reliability. This implies a framework that incorporates several interconnected components of a technical, operational, commercial, and managerial nature. The quantitative expression of reliability was based on elements of mathematical probability theory and statistics (the exponential distribution law), since the failure and repair mechanisms are not subject to deterministic laws. Findings. The following TR-7A subassemblies, if defective, could have been the cause of a failure: chains, hydraulic couplings, chain lifters, the drive, return drums, and some electrical equipment. After 28 months of monitoring TR-7A operation, we established the number of failures (defects) ni, the operating time between failures ti, the frequency of failures fc, the time to repair tri, the weighted repair time pr, the mean time between failures (MTBF), and the mean time to repair (MTR). Originality. Data collection and processing involve the adoption of specific procedures that allow the causes and frequency of failures to be highlighted correctly. This approach allowed solutions to be found for increasing the reliability of some TR-7A conveyor subassemblies (i.e., those subjected to abrasive wear). Practical implications. One solution was to use materials with a compositional and functional gradient on the worn surfaces of some subassemblies. It was successfully applied to the chain lifters, where a significant increase in the mean time between failures was obtained.
The field of application of these materials can be extended to the metal subassemblies of machines and equipment subject to abrasive wear, both in underground mines and in quarries.
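Under the exponential distribution law the article uses, MTBF and MTR follow directly from the monitoring log, and reliability decays as exp(-t/MTBF); the failure counts and hours below are hypothetical, not the TR-7A's recorded data.

```python
import math

def reliability(t_hours, mtbf_hours):
    """Exponential-law reliability: probability of no failure up to time t."""
    return math.exp(-t_hours / mtbf_hours)

# hypothetical log for one conveyor subassembly over the monitoring period
n_failures = 14
operating_hours = 9800.0
repair_hours = 70.0

mtbf = operating_hours / n_failures    # mean time between failures (700 h)
mtr = repair_hours / n_failures        # mean time to repair (5 h)
r_one_week = reliability(168, mtbf)    # chance of a failure-free week
```

The constant-rate (exponential) assumption is what lets a single MTBF figure summarize the whole log; subassemblies dominated by abrasive wear would in practice show an increasing failure rate instead.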


Nafta-Gaz ◽  
2021 ◽  
Vol 77 (9) ◽  
pp. 571-578
Author(s):  
Beyali Ahmedov ◽  
Anar Hajiyev ◽  
Vugar Mustafayev ◽  
...  

The article presents the results of experimental studies assessing the loading and balancing of a new design of beamless sucker-rod pumping unit. It is noted that the key factor with the most significant effect on the mean time between failures (MTBF) is the correct balancing of the pumping unit. The main purpose of the balancing device is to accumulate potential energy during the downstroke and release it during the upstroke of the rod. It has been proved that the proposed additional balancing system (a movable counterweight), which helps reduce the uneven load on the electric motor and the power consumption of the pumping unit, will also increase the efficiency of the beamless sucker-rod pumping unit. It was found that losses in sucker-rod pumps depend on the degree of balance of the counterweights. If the unbalance coefficient of the equipment is in the range from -5 to +5%, the power loss due to unbalance can be ignored. In the current article, the authors propose a technique that makes it possible to determine the energy characteristics of the pumping unit's electric drive under conditions of cyclically changing load and insufficient balance. It was revealed that when the balancer head passes from the upstroke to the downstroke and vice versa, there are sections with a negative torque value, which is explained by the influence of the inertial forces of the moving masses. This leads to shocks in the gearing of the reducer at the extreme positions of the cranks, increased wear, and possibly breakage of the teeth. Since this phenomenon cannot be completely eliminated, one should strive to limit the magnitude of the negative torque through correct balancing of the sucker-rod pump. In all cases, a change in the operating mode of the new beamless pumping unit design requires new calculations and a change in the position and weights of the movable and rotary counterweights (with combined balancing).
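One common field proxy for the unbalance coefficient discussed above compares peak motor current on the up- and downstroke; this is a generic technique, not necessarily the authors' method, and the current readings below are invented for illustration.

```python
def unbalance_coefficient(peak_current_up, peak_current_down):
    """Relative difference of peak motor current on the up- and downstroke,
    as a percentage. A positive value suggests the unit is rod-heavy,
    a negative one counterweight-heavy."""
    i_max = max(peak_current_up, peak_current_down)
    return 100.0 * (peak_current_up - peak_current_down) / i_max

# hypothetical readings from a wattmeter/ammeter survey (amperes)
delta = unbalance_coefficient(42.0, 40.0)     # ~4.8 %
# within the +/-5 % band the paper cites, unbalance power loss is negligible
well_balanced = -5.0 <= delta <= 5.0
```

If the coefficient drifts outside the band, the counterweight position or mass is adjusted until the up- and downstroke loads on the motor are nearly equal.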


Buildings ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 156
Author(s):  
Deniz Besiktepe ◽  
Mehmet E. Ozbek ◽  
Rebecca A. Atadero

Condition information is essential to develop effective facility management (FM) strategies. Visual inspections and walk-through surveys are common practices of condition assessment (CA), generally resulting in qualitative and subjective outcomes such as “poor”, “good”, etc. Furthermore, limited resources of the FM process demand that CA practices be efficient. Given these, the purpose of this study is to develop a resource efficient quantitative CA framework that can be less subjective in establishing a condition rating. The condition variables of the study—mean time between failures, age-based obsolescence, facility condition index, occupant feedback, and preventive maintenance cycle—are identified through different sources, such as a computerized maintenance management system, expert opinions, occupants, and industry standards. These variables provide proxy measures for determining the condition of equipment with the implementation example for heating, ventilating, and air conditioning equipment. Fuzzy sets theory is utilized to obtain a quantitative condition rating while minimizing subjectivity, as fuzzy sets theory deals with imprecise, uncertain, and ambiguous judgments with membership relations. The proposed CA framework does not require additional resources, and the obtained condition rating value supports decision-making for building maintenance management and strategic planning in FM, with a comprehensive and less subjective understanding of condition.
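A minimal sketch of the fuzzy-aggregation idea: triangular membership functions score each (hypothetical, normalized) condition variable, and a weighted sum yields a single quantitative rating. The variable values, weights, and membership parameters are assumptions for illustration, not the paper's calibration.

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# hypothetical condition variables, normalized to [0, 1] (1 = best)
variables = {"mtbf": 0.8, "age": 0.6, "fci": 0.7,
             "feedback": 0.9, "pm_cycle": 0.5}
weights = {"mtbf": 0.3, "age": 0.2, "fci": 0.2,
           "feedback": 0.15, "pm_cycle": 0.15}

# degree to which each variable is "good": a triangle peaking at 1.0
good = {k: tri_membership(v, 0.2, 1.0, 1.8) for k, v in variables.items()}
condition_rating = sum(weights[k] * good[k] for k in variables)
```

The membership functions absorb the imprecision of each proxy measure, so the final rating is a graded number rather than a coarse "poor/good" label.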


2021 ◽  
Vol 11 (7) ◽  
pp. 3127
Author(s):  
Angelo Lerro ◽  
Manuela Battipede

This work deals with the safety analysis of an air data system (ADS) partially based on synthetic sensors. The ADS is designed for the small aircraft transportation (SAT) community and is suitable for future unmanned aerial vehicle and urban air mobility applications. The ADS's main innovation is the estimation of the flow angles (angle-of-attack and angle-of-sideslip) using synthetic sensors instead of classical vanes (or sensors), whereas pressure and temperature are directly measured with Pitot and temperature probes. As the air data system is safety-critical, safety analyses are performed and the results are compared with the safety objectives required by the aircraft integrator. The present paper introduces the common aeronautical procedures for system safety assessment, applied to a safety-critical system partially based on synthetic sensors. The mean times between failures of the ADS's sub-parts are estimated on a statistical basis in order to evaluate the failure rates of the ADS's functions. The proposed safety analysis is also useful in identifying the most critical air data system parts and sub-parts. Possible technological gaps to be filled to achieve the airworthiness safety objectives with nonredundant architectures are also identified.
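Under a series assumption (an ADS function is lost if any of its sub-parts fails), sub-part failure rates simply add; the MTBF values and the safety objective below are placeholders for illustration, not the paper's figures.

```python
def system_failure_rate(part_mtbfs_h):
    """Series assumption: the function is lost if any sub-part fails,
    so per-hour failure rates (1/MTBF) add."""
    return sum(1.0 / m for m in part_mtbfs_h)

# hypothetical sub-part MTBFs in hours (e.g. probe, acquisition board,
# processing unit) -- placeholder values only
parts = [50_000.0, 80_000.0, 120_000.0]

lam = system_failure_rate(parts)       # failures per flight hour
system_mtbf = 1.0 / lam                # MTBF of the whole function
meets_objective = lam < 1e-4           # assumed airworthiness objective
```

This additive structure is what makes the analysis useful for spotting the most critical sub-part: the smallest MTBF dominates the sum, so it is the first candidate for redundancy or redesign.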

