code maintenance
Recently Published Documents


TOTAL DOCUMENTS

23
(FIVE YEARS 8)

H-INDEX

5
(FIVE YEARS 0)

2021 ◽  
Author(s):  
Maicon Faria ◽  
Mario Acosta ◽  
Miguel Castrillo ◽  
Stella V. Paronuzzi Ticco ◽  
Sergi Palomas ◽  
...  

This work is part of an effort to enable NEMO to take advantage of modern accelerators. To that end, we focus on porting routines that have little impact on code maintenance while offering the largest possible reduction in overall runtime. Our candidates were the diagnostic routines, specifically diahsb (heat, salt, and volume budgets) and diawri (ocean variable output), each of which accounts for about 5% of NEMO's runtime in our test cases. Both can be executed asynchronously, allowing the GPU diagnostic computation to overlap with the CPU computation of other NEMO routines.

We report a methodology for porting NEMO runtime diagnostics to the GPU using CUDA Fortran and OpenACC. Both synchronous and asynchronous versions are implemented for diahsb and diawri. An associated time step and stream interleaving are proposed to overlap NEMO's CPU execution with data transfers between CPU and GPU.

With constrained computational resources and high-resolution grids, the synchronous implementations of diahsb and diawri show speed-ups of up to 3.5x. The asynchronous implementation of diahsb achieves higher speed-ups, from 2.7x to 5x, in the study cases. These results indicate that the asynchronous approach is profitable even when plenty of computational resources are available and the number of MPI ranks is at the threshold of parallel effectiveness for a given workload. For diawri, on the other hand, the asynchronous results depart from those of diahsb: the diawri module has 30 times more datasets requiring pinned memory to overlap CPU-GPU communication with CPU execution. The pinned attribute restricts how datasets allocated in main memory can be managed, in exchange for letting the GPU access main memory while the CPU keeps computing. The result is a scenario where offloading the diagnostic computation affects NEMO's overall CPU execution. Our main hypothesis is that the amount of pinned memory in use degrades runtime data management; this is supported by a 7% increase in L3 data cache misses in the study case. Although the number of datasets needed for asynchronous communication must be evaluated before porting a diagnostic, the payoff of asynchronous diagnostics can be worth it given the higher speed-ups achievable with this technique. This work shows that models such as NEMO, developed only for CPU architectures, can port some of their computation to accelerators. It also describes a simple, successful way to implement an asynchronous approach in which CPU and GPU work in parallel without modifying the CPU code itself, since the diagnostics are extracted as GPU kernels while the CPU continues the simulation.
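The overlap scheme described above (diagnostics computed on a snapshot of the previous time step while the main loop advances) is language-agnostic. The following is a minimal sketch in Python of that "associated time step" idea, with a worker thread standing in for the GPU stream and a copy of the state standing in for the host-to-device transfer; all function names are illustrative, not from the NEMO code.

```python
import threading
import queue

def run_simulation(n_steps, state, step_fn, diag_fn):
    """Overlap diagnostics with time stepping: while the main loop
    advances step n+1, a worker computes diagnostics on a snapshot of
    the state at step n (the 'associated time step' idea)."""
    results = {}
    work = queue.Queue(maxsize=1)  # bounded lag of one step

    def diag_worker():
        while True:
            item = work.get()
            if item is None:
                break
            step, snapshot = item
            results[step] = diag_fn(snapshot)  # stands in for the GPU kernel
            work.task_done()

    worker = threading.Thread(target=diag_worker)
    worker.start()
    for step in range(n_steps):
        snapshot = list(state)      # copy plays the role of the CPU->GPU transfer
        work.put((step, snapshot))  # diagnostics proceed concurrently...
        state = step_fn(state)      # ...while the "CPU" advances the model
    work.join()                     # wait for the last diagnostic
    work.put(None)
    worker.join()
    return state, results
```

The bounded queue gives the same back-pressure a finite number of GPU streams would: the main loop can run at most one step ahead of the diagnostics.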


2020 ◽  
Author(s):  
Joseph R. Stinziano ◽  
Cassaundra Roback ◽  
Demi Gamble ◽  
Bridget K. Murphy ◽  
Patrick J. Hudson ◽  
...  

Summary

Plant physiological ecology is founded on a rich body of physical and chemical theory, but it is challenging to connect theory with data in unambiguous, analytically rigorous, and reproducible ways. Custom scripts written in computer programming languages (coding) enable plant ecophysiologists to model plant processes and to fit models to data reproducibly using advanced statistical techniques. Since most ecophysiologists lack formal programming education, the field has yet to adopt a unified set of coding principles and standards that could make code easier to learn, use, and modify.

We outline principles and standards for coding in plant ecophysiology to develop: 1) standardized nomenclature; 2) consistency in style; 3) increased modularity/extensibility for easier editing and understanding; 4) code scalability for application to large datasets; 5) documented contingencies for code maintenance; 6) documentation to facilitate user understanding; and 7) extensive tutorials so that biologists new to coding can rapidly become proficient with the software.

We illustrate these principles using a new R package, {photosynthesis}, designed to provide a set of analytical tools for plant ecophysiology. Our goal with these principles is to future-proof coding efforts so that new advances and analytical tools can be rapidly incorporated into the field, while ensuring software maintenance across scientific generations.
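To make principles such as standardized nomenclature, input validation, and documentation concrete, here is a minimal sketch (in Python rather than the R of the actual {photosynthesis} package) of a documented, modular function for the standard non-rectangular hyperbola light-response model; the function and parameter names are illustrative, not taken from the package.

```python
import math

def photosynthesis_light_response(par, a_max, phi, theta=0.7):
    """Net assimilation from the non-rectangular hyperbola light-response model.

    Parameter names follow the 'standardized nomenclature' principle:
        par    -- photosynthetic photon flux density (umol m-2 s-1)
        a_max  -- light-saturated assimilation rate (umol m-2 s-1)
        phi    -- apparent quantum yield (mol mol-1)
        theta  -- curvature parameter (dimensionless, 0 < theta <= 1)
    """
    # Documented contingency: fail loudly on physically meaningless inputs.
    if not 0 < theta <= 1:
        raise ValueError("theta must lie in (0, 1]")
    b = phi * par + a_max
    return (b - math.sqrt(b * b - 4 * theta * phi * par * a_max)) / (2 * theta)
```

A small, single-purpose function like this is easy to test, reuse, and extend -- the modularity the abstract argues for.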


2020 ◽  
Vol XXIII (1) ◽  
pp. 166-170
Author(s):  
Ladislav Stazić

According to the International Safety Management (ISM) Code, every cargo ship of 500 gross tonnage and over must establish a Safety Management System, documented in a Safety Management Manual. This document is prepared by the Company and contains rules for the maintenance of equipment, particularly equipment identified as critical. Those rules must be included in the ship's Planned Maintenance System, another requirement of the ISM Code. The maintenance intervals in these rules are based on several factors; this paper emphasizes and analyzes the Company's experience in the operation and maintenance of the ship and its machinery and equipment. An analysis of records from different computerized Planned Maintenance Systems obtained from several companies shows that each company implemented its experience in the system. At the same time, the results show that very few modifications of machinery maintenance intervals were made proactively on the basis of acquired experience and new knowledge.


2020 ◽  
Author(s):  
Aristotelis Leventidis ◽  
Jiahui Zhang ◽  
Cody Dunne ◽  
Wolfgang Gatterbauer ◽  
H.V. Jagadish ◽  
...  

Understanding the meaning of existing SQL queries is critical for code maintenance and reuse. Yet SQL can be hard to read, even for expert users or the original author of a query. We conjecture that the logical intent of queries can be captured in automatically generated visual diagrams that help users understand the meaning of queries faster and more accurately than SQL text alone. We present initial steps in that direction with visual diagrams that are based on the first-order logic foundation of SQL and can capture the meaning of deeply nested queries. Our diagrams build upon a rich history of diagrammatic reasoning systems in logic and were designed using a large body of human-computer interaction best practices: they are minimal, in that no visual element is superfluous; they are unambiguous, in that no two queries with different semantics map to the same visualization; and they extend previously existing visual representations of relational schemata and conjunctive queries in a natural way. An experimental evaluation with 42 users on Amazon Mechanical Turk shows that, after only a 2-3 minute static tutorial, participants could interpret queries meaningfully faster with our diagrams than by reading SQL alone. Moreover, we have evidence that our visual diagrams lead participants to make fewer errors than with SQL. We believe that more regular exposure to diagrammatic representations of SQL can give rise to a pattern-based, and thus more intuitive, use and reuse of SQL. A full version of this paper, with all appendices and supplemental material for the experimental study (stimuli, raw data, and analysis code), is available at https://osf.io/btszh
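The kind of deeply nested query the abstract refers to can be illustrated with the textbook relational-division pattern ("find sailors who reserved every boat"), whose double NOT EXISTS form is notoriously hard to read from the text alone. The schema and helper below are illustrative, not taken from the paper; the query runs against an in-memory SQLite database.

```python
import sqlite3

# Relational division via double negation: "there is no boat that
# this sailor has NOT reserved." Exactly the nesting depth at which
# SQL text stops being self-explanatory.
QUERY = """
SELECT s.sname
FROM sailor s
WHERE NOT EXISTS (
    SELECT * FROM boat b
    WHERE NOT EXISTS (
        SELECT * FROM reserves r
        WHERE r.sid = s.sid AND r.bid = b.bid))
"""

def sailors_with_all_boats(sailor_rows, boat_rows, reserves_rows):
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE sailor (sid INTEGER, sname TEXT)")
    con.execute("CREATE TABLE boat (bid INTEGER)")
    con.execute("CREATE TABLE reserves (sid INTEGER, bid INTEGER)")
    con.executemany("INSERT INTO sailor VALUES (?, ?)", sailor_rows)
    con.executemany("INSERT INTO boat VALUES (?)", boat_rows)
    con.executemany("INSERT INTO reserves VALUES (?, ?)", reserves_rows)
    return [name for (name,) in con.execute(QUERY)]
```

A diagram can render the two negated quantifiers as nested boxes, making the "for all" intent visible at a glance where the SQL text requires careful unwinding.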


This paper measures various code smells by identifying the critical ones, so that effort can be concentrated on those parts, and orders them through structural modeling. Ordering the code smells so that their detection and removal does not introduce a new smell is essential, and structural modeling helps clarify the interrelationships among them. Code smells with high driving effects are prioritized, which improves the overall maintainability of the software and, in turn, supports re-usability. In addition, a restructuring technique is applied to achieve higher accuracy. The approach addresses several performance-related objectives, and the code smells are handled with pairwise analysis based on a priority method. Pairwise analysis based on the weights assigned to the bad smells yields better-optimized results, since the most problematic areas are addressed first. The process improves overall code maintainability by applying restructuring, before refactoring, with a fuzzy technique; it then identifies the code smells with high ripple effects and removes them. Further research is still needed on removing bad smells from code.
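The abstract does not spell out how the pairwise weights are computed, but one common way to turn pairwise criticality judgements into a priority ordering is the AHP-style row geometric mean. The sketch below, with hypothetical smell names, shows only that general technique, not the paper's specific method.

```python
import math

def priority_weights(names, pairwise):
    """Rank code smells from a pairwise-comparison matrix (AHP-style).

    pairwise[i][j] > 1 means smell i is judged more critical than
    smell j (and pairwise[j][i] is its reciprocal). Weights are the
    normalized row geometric means.
    """
    n = len(names)
    gmeans = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gmeans)
    weights = [g / total for g in gmeans]
    # Highest weight first: the smells to address before the others.
    return sorted(zip(names, weights), key=lambda kv: kv[1], reverse=True)
```

The resulting order can then drive which smells are restructured first, so that removing one does not ripple into the others.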


Author(s):  
Jie Wen ◽  
Robert Keating ◽  
Timothy M. Adams

Abstract ASME Boiler and Pressure Vessel Code, Section III, Division 1, Subsection NC, Class 2 components, and Subsection ND, Class 3 components, have significant technical and administrative similarities. The ASME BPV III Standards Committee has a long-standing goal of combining these two subsections (NC and ND). Consolidating Subsections NC and ND will simplify the Code, reduce repetition, and make the Code easier to use; a combined Subsection NC/ND will also simplify Code maintenance. To facilitate this consolidation, the Subgroup on Component Design, under the BPV III Standards Committee, assigned a Task Group to develop a strategy for combining the two subsections into a single subsection while maintaining Class 2 and Class 3 as separate classes of construction. Both Subsections NC and ND have been completely reviewed and compared, and the technical bases for the differences have been established. The conclusion of this review is that there are only a few major technical differences between the two code class rules; there are, however, a significant number of editorial differences. Based on the review, the Task Group developed a strategy that completes the consolidation within two publishing cycles of the Code. For the 2019 Code Edition, two separate Subsection NC and ND books will be published to resolve editorial differences and otherwise align the two subsections. For the 2021 Code Edition, a single merged subsection will be published. This paper provides the background for the proposed code change, discusses the detailed results of the NC/ND comparison, and provides the basis for the major technical differences. The paper also reports the status of the project and the code actions needed to consolidate to a single subsection.


2019 ◽  
Vol 214 ◽  
pp. 05036
Author(s):  
Rémi Ete ◽  
Antoine Pingault

Data quality monitoring is the first step in certifying data recorded for offline physics analyses. Many experiments have developed their own dedicated monitoring systems in the past. Most of them rely on a specific event data model, which leads to a strong dependency on the data format and storage. We present here a generic data quality monitoring system, DQM4hep, developed without any assumption on the underlying event data model. This reduces code maintenance effort and increases portability and reusability across experiments. We first introduce the framework architecture and the various core components, as well as the tools provided by the software package. We then give an overview of the experiments using DQM4hep and its foreseen integration into other future experiments. We finally present the ongoing and future software development for DQM4hep and long-term prospects.
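The design idea behind an EDM-agnostic framework can be sketched in a few lines: the framework routes opaque event payloads and never interprets them, so only the experiment-specific modules know the data format. DQM4hep itself is a C++ package; this Python sketch and its class names are purely illustrative, not the actual API.

```python
from abc import ABC, abstractmethod

class AnalysisModule(ABC):
    """An EDM-agnostic monitoring module: the framework hands each module
    an opaque event buffer; only the experiment-specific subclass knows
    how to decode the payload."""

    @abstractmethod
    def process_event(self, raw_event: bytes) -> None:
        ...

class MonitoringFramework:
    def __init__(self):
        self._modules = []

    def register(self, module: AnalysisModule) -> None:
        self._modules.append(module)

    def dispatch(self, raw_event: bytes) -> None:
        # The framework only routes opaque payloads; no event data
        # model appears anywhere in the framework code.
        for module in self._modules:
            module.process_event(raw_event)
```

Because the framework layer never names a concrete event class, porting it to another experiment means writing new modules, not modifying the framework -- which is the maintenance and reusability gain the abstract describes.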

