Advantage of In-Service Condition-Based Assessment for Transformers in Enhancing the Maintenance Strategy

2020 ◽  
Vol 69 (3) ◽  
Author(s):  
Ibrahim Al Balushi

Controlling maintenance OPEX is one of the major challenges that any utility faces. The challenge lies in how to optimize the three main factors: risk, performance, and cost. Moreover, no utility can depend on a single type of maintenance; there is always a combination of different kinds, such as breakdown, preventive, risk-based, and condition-based maintenance. So, what type of maintenance needs to be followed to keep the transformer in service with high performance? There is no specific answer to this question; each type of maintenance can be applied based on the transformer's operating environment. However, most utilities apply preventive and condition-based maintenance. To justify this answer, some data need to be analyzed to assess maintenance performance and to recommend which enhancements need to be added. One of these approaches is to apply in-service condition-based assessment to study the health of the assets under the current maintenance practice. Furthermore, studying both historical maintenance records and failure rates helps in understanding the relationship between maintenance effectiveness and service efficiency. This relationship takes two forms. The first is doing the right things: developing a set of maintenance activities to be performed during maintenance to ensure its effectiveness. The second is doing things right: enhancing the maintenance crew's capabilities and competencies to ensure high efficiency. After analyzing all the factors mentioned above, it is evident that in-service condition-based assessment of the transformer is a powerful tool that can be used to build and enhance an effective strategy. It not only involves a set of activities during maintenance but also covers the whole life cycle of the transformer.
Besides, it highlights the gaps in the maintenance processes and procedures, and provides indications of where enhancements need to be applied based on international practice. These changes were observed in cost and performance in the benchmarking carried out through the International Transmission Operation and Maintenance Study (ITOMS), which was a good indication of the effectiveness of the strategy used for transformers. However, as part of the asset management approach, continuous improvement will continue toward the vision set for maintenance optimization and will prepare for the significant future increase in transformer aging.

Author(s):  
Javier Conejero ◽  
Sandra Corella ◽  
Rosa M Badia ◽  
Jesus Labarta

Task-based programming has proven to be a suitable model for high-performance computing (HPC) applications. Different implementations have demonstrated this and have promoted the acceptance of task-based programming in the OpenMP standard. Furthermore, in recent years, Apache Spark has gained wide popularity in business and research environments as a programming model for addressing emerging big data problems. COMP Superscalar (COMPSs) is a task-based environment that tackles distributed computing (including Clouds) and is a good alternative task-based programming model for big data applications. This article describes why we consider task-based programming models a good approach for big data applications. The article includes a comparison of Spark and COMPSs in terms of architecture, programming model, and performance. It focuses on the structural differences between the two frameworks, on their programming interfaces, and on their efficiency by means of three widely known benchmarking kernels: Wordcount, Kmeans, and Terasort. These kernels enable the evaluation of the most important functionalities of both programming models and the analysis of different workflows and conditions. The main results of this comparison are that (1) COMPSs is able to extract the inherent parallelism from user code with minimal coding effort, as opposed to Spark, which requires existing algorithms to be adapted and rewritten to explicitly use its predefined functions, (2) COMPSs offers improved performance compared with Spark, and (3) COMPSs has been shown to scale better than Spark in most cases. Finally, we discuss the advantages and disadvantages of both frameworks, highlighting the differences that make them unique, thereby helping to choose the right framework for each particular objective.
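The task-based idea underlying this comparison can be illustrated with a toy sketch of the Wordcount kernel: the user writes plain functions, and a scheduler runs the partial counts in parallel. This is a hypothetical stand-in using a thread pool as the "runtime", not the actual PyCOMPSs or Spark API:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def wordcount_chunk(lines):
    # One "task": count the words in a chunk of the input.
    c = Counter()
    for line in lines:
        c.update(line.split())
    return c

def merge_counts(a, b):
    # Reduction task: combine two partial counters.
    a.update(b)
    return a

def wordcount(lines, chunks=4):
    # Split the input; the executor plays the scheduler that a
    # task-based runtime would provide, extracting the parallelism.
    size = max(1, len(lines) // chunks)
    parts = [lines[i:i + size] for i in range(0, len(lines), size)]
    with ThreadPoolExecutor() as ex:
        partials = list(ex.map(wordcount_chunk, parts))
    total = Counter()
    for p in partials:
        total = merge_counts(total, p)
    return total

print(wordcount(["a b a", "b c", "a"]))
```

The contrast the article draws is that in a Spark version the same kernel must be rewritten around predefined operations such as map and reduceByKey, whereas a task-based runtime parallelizes code that remains close to the sequential original.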


2016 ◽  
Vol 2016 (DPC) ◽  
pp. 000324-000341 ◽  
Author(s):  
Chet Palesko ◽  
Amy Palesko

2.5D and 3D packaging can provide significant size and performance advantages over other packaging technologies. However, these advantages usually come at a high price. Since 2.5D and 3D packaging costs are significant, today they are used only if no other option can meet the product requirements, and most of these applications are relatively low volume. Products such as high-end FPGAs, high-performance GPUs, and high-bandwidth memory are great applications, but none has volume requirements close to those of mobile phones or tablets. Without the benefit of volume production, the cost of 2.5D and 3D packaging could stay high for a long time. In this paper, we provide cost model results for a complete 2.5D and 3D manufacturing process. Each manufacturing activity is included, and the key cost drivers are analyzed with regard to future cost reductions. Expensive activities that are well down the learning curve (RDL creation, CMP, etc.) will probably not change much in the future. However, expensive activities that are new to this process (DRIE, temporary bond/debond, etc.) provide good opportunities for cost reduction. A variety of scenarios are included to understand how design characteristics impact cost. Understanding how and why the dominant cost components will change over time is critical to accurately predicting the future cost of 2.5D and 3D packaging.
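The learning-curve argument can be made concrete with Wright's law, under which each doubling of cumulative volume multiplies unit cost by a fixed learning rate. The function and the 85% rate below are illustrative assumptions, not values from the paper's cost model:

```python
import math

def learning_curve_cost(first_unit_cost, cumulative_units, learning_rate=0.85):
    # Wright's law: each doubling of cumulative volume multiplies the
    # unit cost by the learning rate (0.85 -> 15% cheaper per doubling).
    b = math.log(learning_rate, 2)  # exponent is negative for rates < 1
    return first_unit_cost * cumulative_units ** b

# Hypothetical activity with a $10 first-unit cost, after 1,000 cumulative units:
print(round(learning_curve_cost(10.0, 1000), 2))
```

This captures the asymmetry the paper points to: activities already far down the curve (large cumulative volume) see little further absolute reduction, while new activities still have most of their cost decline ahead of them.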


2017 ◽  
Vol 20 (4) ◽  
pp. 1151-1159 ◽  
Author(s):  
Folker Meyer ◽  
Saurabh Bagchi ◽  
Somali Chaterji ◽  
Wolfgang Gerlach ◽  
Ananth Grama ◽  
...  

Abstract As technologies change, MG-RAST is adapting. Newly available software is being included to improve accuracy and performance. As a computational service constantly running large-volume scientific workflows, MG-RAST is the right location to perform benchmarking and implement algorithmic or platform improvements, in many cases involving trade-offs between specificity, sensitivity, and run-time cost. The work in [Glass EM, Dribinsky Y, Yilmaz P, et al. ISME J 2014;8:1–3] is an example; we use existing well-studied data sets as gold standards representing different environments and different technologies to evaluate any changes to the pipeline. Currently, we use well-understood data sets in MG-RAST as a platform for benchmarking. The use of artificial data sets for pipeline performance optimization has not added value, as these data sets do not present the same challenges as real-world data sets. In addition, the MG-RAST team welcomes suggestions for improvements to the workflow. We are currently working on versions 4.02 and 4.1, both of which contain significant input from the community and our partners; these versions will enable double barcoding, support stronger inferences through longer-read technologies, and increase throughput while maintaining sensitivity by using Diamond and SortMeRNA. On the technical platform side, the MG-RAST team intends to support the Common Workflow Language as a standard to specify bioinformatics workflows, both to facilitate development and to enable efficient high-performance implementation of the community's data analysis tasks.


2021 ◽  
Author(s):  
Sabir Hussain ◽  
Ghulam Jaffer

Abstract The need for broadband data has grown rapidly, but in underserved rural areas, mobile connectivity over 3G and LTE is still a significant challenge. Historical trends indicate that data traffic and internet use will continue to grow in these areas [1]. The next generation of satellites aims to decrease the cost per MB while offering higher throughput and availability. To maintain link performance, choosing an appropriate frequency is essential. A multi-beam satellite system can fulfill the demand and performance requirements over a coverage area. High-throughput satellites (HTS) fulfill this requirement using the C and Ku bands. In this paper, we present the benefits of using the Ku band on the user side and a composite of the C and Ku bands on the gateway side. This configuration has proved to be a cost-efficient, high-performance solution compared with the traditional straight configuration. The data rate improves fivefold on both upstream and downstream compared with the existing FSS system. Moreover, Ku-band users gain this significant performance improvement without upgrading their equipment.


Author(s):  
Adil Iguider ◽  
Oussama Elissati ◽  
Abdeslam En-Nouaary ◽  
Mouhcine Chami

Smart systems are becoming more present in every aspect of our daily lives. The main component of such systems is an embedded system; the latter ensures the collection, processing, and transmission of accurate information at the right time and to the right component. Modern embedded systems face several challenges; the objective is to design a system with high performance while decreasing the cost and the development time. Consequently, robust methodologies such as Codesign were developed to fulfill these requirements. The most important step of Codesign is the partitioning of the system's functionalities between a hardware set and a software set. This article deals with this problem and uses a heuristic approach based on shortest-path optimizations to solve it. The aim is to minimize the total hardware area while respecting a constraint on the overall execution time of the system. Experimental results demonstrate that the proposed method is very fast and gives better results than the genetic algorithm.
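The partitioning objective can be sketched in a few lines. The article's actual method is a shortest-path heuristic; the greedy sketch below, with invented task data, is a simpler stand-in that only illustrates the same trade-off: start fully in software (zero hardware area) and move tasks to hardware, best time-saved-per-unit-area first, until the execution-time constraint is met:

```python
def partition(tasks, time_limit):
    # tasks: list of (name, sw_time, hw_time, hw_area) tuples.
    total_time = sum(t[1] for t in tasks)   # start fully in software
    hw, area = set(), 0.0
    # Candidates ordered by time saved per unit of hardware area.
    candidates = sorted(tasks, key=lambda t: (t[1] - t[2]) / t[3], reverse=True)
    for name, sw, hwt, a in candidates:
        if total_time <= time_limit:
            break                           # deadline already met
        if sw > hwt:                        # move only if hardware is faster
            hw.add(name)
            area += a
            total_time -= sw - hwt
    return hw, area, total_time

# Three hypothetical functions and a deadline of 16 time units:
hw, area, t = partition(
    [("f1", 10, 2, 5.0), ("f2", 8, 4, 2.0), ("f3", 6, 5, 1.0)],
    time_limit=16,
)
print(hw, area, t)
```

A greedy pass like this meets the deadline cheaply but can miss the optimum; modeling the choice as a shortest-path problem, as the article does, explores the assignment space more systematically.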


Science ◽  
2021 ◽  
Vol 372 (6545) ◽  
pp. eabg1487
Author(s):  
Dongdong Gu ◽  
Xinyu Shi ◽  
Reinhart Poprawe ◽  
David L. Bourell ◽  
Rossitza Setchi ◽  
...  

Laser-metal additive manufacturing capabilities have advanced from single-material printing to multimaterial/multifunctional design and manufacturing. Material-structure-performance integrated additive manufacturing (MSPI-AM) represents a path toward the integral manufacturing of end-use components with innovative structures and multimaterial layouts to meet the increasing demand from industries such as aviation, aerospace, automobile manufacturing, and energy production. We highlight two methodological ideas for MSPI-AM—“the right materials printed in the right positions” and “unique structures printed for unique functions”—to realize major improvements in performance and function. We establish how cross-scale mechanisms to coordinate nano/microscale material development, mesoscale process monitoring, and macroscale structure and performance control can be used proactively to achieve high performance with multifunctionality. MSPI-AM exemplifies the revolution of design and manufacturing strategies for AM and its technological enhancement and sustainable development.


Author(s):  
Maher A. El-Masri

Intercooled/recuperated gas turbine systems provide high efficiency and power density for naval propulsion. Current aero-derivative systems are capable of about 43% thermal efficiency in this configuration. With continued progress in gas-turbine materials and cooling technology, the possibility arises of further improving system performance by incorporating gas-turbine reheat. A preliminary scan of this class of cycles is presented and compared with non-reheat intercooled/recuperated cycles at two levels of component technology. For conservative component technology, reheat is found to provide very modest performance advantages. With advanced components and ceramic thermal barrier coatings, reheat is found to offer potential for specific power improvements of up to 33% and for modest efficiency gains, on the order of one percentage point, while enabling turbine inlet temperatures well below those of the most efficient non-reheat cycles. The high-performance reheat systems, however, require reheat-combustor inlet temperatures beyond current practice. The use of water injection in the intercooler, together with an aftercooler and a water-injected evaporative recuperator, is found to produce very large gains in efficiency as well as specific power. This modification may be feasible for land-based systems, where it can compete favourably with combined cycles. Despite the difficulty of obtaining pure water for a shipboard propulsion system, those large gains may justify further studies of this system and of means to provide its water supply in marine applications.


Molecules ◽  
2021 ◽  
Vol 26 (18) ◽  
pp. 5438
Author(s):  
Danijela S. Kretić ◽  
Ivana S. Veljković ◽  
Aleksandra B. Đunović ◽  
Dušan Ž. Veljković

The existence of areas of strongly positive electrostatic potential in the central regions of the molecular surfaces of high-energy molecules is a strong indicator that these compounds are very sensitive towards detonation. Developing high-energy compounds with reduced sensitivity towards detonation and high efficiency is hard to achieve, since energetic molecules with high performance are usually very sensitive. Here, we used Density Functional Theory (DFT) calculations to study a series of bis(acetylacetonato) and nitro-bis(acetylacetonato) complexes and to elucidate their potential application as energetic compounds with moderate sensitivities. We calculated electrostatic potential maps for these molecules and analyzed the values of positive potential in the central portions of the molecular surfaces in the context of their sensitivity towards detonation. The analysis of the electrostatic potential demonstrated that nitro-bis(acetylacetonato) complexes of Cu and Zn have values of electrostatic potential in the central regions (25.25 and 25.06 kcal/mol, respectively) similar to conventional explosives such as TNT (23.76 kcal/mol). The results of the analysis of electrostatic potentials and bond dissociation energies for the C-NO2 bond indicate that nitro-bis(acetylacetonato) complexes could serve as potential energetic compounds with satisfactory sensitivity and performance.


2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Jianyu Chen ◽  
Xiaofeng Wang ◽  
Leiting Tao ◽  
Yuan Liu

Currently, the emergence of edge computing provides low-latency and high-efficiency computing for the Internet of Things (IoT). However, new architectures, protocols, and security technologies for edge computing need to be verified and evaluated before use. Since network emulation based on a cloud platform has advantages in scalability and fidelity, it can provide an effective network environment for verifying and evaluating new edge computing technologies. Therefore, we propose a high-performance emulation technology supporting the routing protocol based on a cloud platform. First, we take OpenStack as the basic network environment. To improve the performance and scalability of routing emulation, we then design the routing emulation architecture according to the software-defined network (SDN) paradigm and design a cluster scheduling mechanism. Finally, our design of the Open Shortest Path First (OSPF) protocol supports communication with physical routers. Through extensive experiments, we demonstrate that this technology not only provides a realistic OSPF protocol but also has clear advantages in the overhead and performance of routing nodes compared with other network emulation technologies. Furthermore, the realization of the controller cluster improves the scalability of the emulation.
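The route computation that an emulated OSPF node performs is, at its core, Dijkstra's shortest-path-first algorithm over link costs. The sketch below is a generic illustration of that computation on a made-up three-router topology, not code from the emulation platform:

```python
import heapq

def spf(links, root):
    # Shortest-path-first (Dijkstra) over a link-state database:
    # links maps node -> {neighbor: cost}. Returns the distance to each
    # reachable node and the first hop to use from the root.
    dist = {root: 0}
    next_hop = {}
    pq = [(0, root, None)]                  # (distance, node, first hop)
    while pq:
        d, node, first = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue                        # stale queue entry
        if first is not None:
            next_hop[node] = first
        for nbr, cost in links.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                # Leaving the root, the first hop is the neighbor itself.
                heapq.heappush(pq, (nd, nbr, first if first else nbr))
    return dist, next_hop

# Hypothetical topology: r1-r2 and r2-r3 cost 1, direct r1-r3 cost 5.
links = {"r1": {"r2": 1, "r3": 5},
         "r2": {"r1": 1, "r3": 1},
         "r3": {"r1": 5, "r2": 1}}
dist, nh = spf(links, "r1")
print(dist, nh)
```

From r1, traffic to r3 is forwarded via r2 (total cost 2) rather than over the expensive direct link, which is exactly the behavior a realistic OSPF emulation must reproduce.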


2020 ◽  
Vol 18 (Suppl.1) ◽  
pp. 625-629
Author(s):  
G. Ganeva

The article focuses on the management approach to the assessment and analysis of deviations in operating enterprises and explains the methodology for calculating deviations. The types of cost deviations and their monitoring in enterprises are defined, such as the deviation of direct material costs and the deviation of direct labor costs. Attention is given to the main reasons for cost deviations, such as the cost of materials, downtime, and labor efficiency. PURPOSE: The aim of the article is to identify and analyze existing technical and methodological approaches for calculating deviations. METHODS: The systemic and structural approach, analysis and synthesis, including the study of literature sources. RESULTS: The contributions concern choosing the right tools for determining and assessing deviations, and the reasons for them, in analyses based on specific numerical information. CONCLUSION: It is found that controlling, as part of cost management, together with variance analysis, helps to identify specific measures and eliminate errors.
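The methodology for splitting a total cost deviation into its components can be sketched in a few lines for direct materials, where the standard-costing convention separates a price effect from a usage effect. The figures below are invented for illustration:

```python
def material_variances(actual_qty, actual_price, std_qty, std_price):
    # Standard-costing split of the total material cost deviation:
    # the price variance isolates the price effect (at actual quantity),
    # the quantity variance isolates the usage effect (at standard price).
    price_var = (actual_price - std_price) * actual_qty
    qty_var = (actual_qty - std_qty) * std_price
    return price_var, qty_var

# Hypothetical month: 1,050 kg used at 4.20/kg against a standard of
# 1,000 kg at 4.00/kg; the two variances sum to the total deviation.
pv, qv = material_variances(1050, 4.20, 1000, 4.00)
print(pv, qv, pv + qv)
```

The same pattern applies to direct labor, with the rate variance in place of the price variance and the efficiency variance in place of the quantity variance.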

