Finding parallel patterns through static analysis in C++ applications

Author(s):  
David del Rio Astorga ◽  
Manuel F Dolz ◽  
Luis Miguel Sánchez ◽  
J Daniel García ◽  
Marco Danelutto ◽  
...  

Since the ‘free lunch’ of processor performance is over, parallelism has become the new trend in hardware and architecture design. However, parallel resources deployed in data centers are underused in many cases, given that sequential programming is still deeply rooted in current software development. To address this problem, new methodologies and techniques for parallel programming have been progressively developed. For instance, parallel frameworks offering programming patterns allow developers to express concurrency in applications and better exploit parallel hardware. Nevertheless, a large portion of production software, from a broad range of scientific and industrial areas, is still developed sequentially. Considering that these software modules contain thousands, or even millions, of lines of code, an extremely large amount of effort is needed to identify parallel regions. To pave the way in this area, this paper presents the Parallel Pattern Analyzer Tool, a software component that aids the discovery and annotation of parallel patterns in source code. This tool simplifies the transformation of sequential source code into parallel code. Specifically, we provide support for identifying Map, Farm, and Pipeline parallel patterns and evaluate the quality of the detection on a set of different C++ applications.
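The abstract names Map, Farm, and Pipeline as the target patterns. As a minimal sketch of what a detected Map region looks like, and assuming nothing about the tool's actual annotation syntax (which the abstract does not give), the loop below has fully independent iterations and can therefore be rewritten with a standard C++17 parallel algorithm:

```cpp
// Sketch of the Map pattern: each iteration writes out[i] from in[i]
// with no loop-carried dependencies, making the loop safe to parallelize.
#include <algorithm>
#include <cstddef>
#include <execution>
#include <vector>

int main() {
    std::vector<double> in(1'000'000, 1.5), out(in.size());

    // Sequential form: a candidate Map region (independent iterations).
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i] = in[i] * in[i];

    // Once recognized as a Map, the loop can be replaced by a parallel
    // equivalent; std::transform with a parallel execution policy is one
    // standard-library option (C++17).
    std::transform(std::execution::par, in.begin(), in.end(),
                   out.begin(), [](double x) { return x * x; });
}
```

Farm and Pipeline regions would analogously map task-farm and staged-stream loops onto a parallel framework's corresponding constructs.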

Author(s):  
Tran Thanh Luong ◽  
Le My Canh

JavaScript has become more and more popular in recent years because of its rich feature set: it is dynamic, interpreted, and object-oriented, with first-class functions. Furthermore, JavaScript is designed around an event-driven, non-blocking I/O model that boosts the performance of the overall application, especially in the case of Node.js. To take advantage of these characteristics, many design patterns that implement asynchronous programming for JavaScript have been proposed. However, choosing the right pattern and writing good asynchronous code is a challenge, and failing to do so can easily lead to less robust applications and low-quality source code. Extending our previous work on exception handling code smells in JavaScript and on exception handling code smells in JavaScript asynchronous programming with promises, this research studies the impact of three JavaScript asynchronous programming patterns on the quality of source code and applications.
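The abstract does not name the three patterns it compares. Purely to illustrate the promise idea it builds on, the following sketch uses a rough C++ analogue, std::async and std::future, rather than JavaScript promises: an operation returns a waitable handle instead of blocking, and errors surface at the single point where the result is consumed:

```cpp
// Illustrative analogue only: a promise-like handle to a value that is
// not yet available, consumed (and error-handled) in one place.
#include <exception>
#include <future>
#include <iostream>
#include <string>

std::string readFile(const std::string& path) {
    // Stand-in for a non-blocking I/O operation.
    return "contents of " + path;
}

int main() {
    // Launch the "I/O" without blocking the caller.
    std::future<std::string> pending =
        std::async(std::launch::async, readFile, "data.txt");

    // ...other work proceeds while the operation runs...

    // Resolve: block only when the result is needed; exceptions thrown
    // inside the task propagate through get(), like a promise rejection.
    try {
        std::cout << pending.get() << '\n';
    } catch (const std::exception& e) {
        std::cerr << "rejected: " << e.what() << '\n';
    }
}
```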


Author(s):  
Don Robertson ◽  
Wayne Russell ◽  
Nigel Alvares ◽  
Debra Carrobourg ◽  
Graeme King

A strategic combination of integrity software, relational databases, GIS, and GPS technologies reduced costs and increased quality of a comprehensive pipeline integrity assessment and repair program that Greenpipe Industries Ltd. completed recently on three crude oil pipelines—two 6-inch and one 8-inch—for Enbridge Pipelines (Saskatchewan) Inc. Greenpipe analyzed metal loss data from recent in-line inspection logs, calculated real-world coordinates of defects and reference welds, prioritized anomalies for repair taking environmental risks into account, and prepared detailed dig sheets and site maps using PipeCraft™, Greenpipe’s advanced GIS-based pipeline integrity-maintenance software package. GPS technology was used to navigate to dig sites and the accuracy of the GPS approach was compared with traditional chainage methods. Pipelines were purged and all defects were cut out and replaced by new pipe during a two-day shutdown on each pipeline. A comprehensive set of data, including high-accuracy GPS location of anomalies, reference welds, and replacement pipe welds, was collected at each dig site and entered into the PipeCraft relational database. After all repairs were completed, the client was provided with a GIS-based electronic final report, allowing point-and-click access to all data collected in the field, including in-line inspection logs, dig information sheets and as-built drawings. The new methodologies employed on this project resulted in a high quality, comprehensive and cost-effective integrity maintenance program.


Author(s):  
Joannes Gullaksen

Abstract Development of software applications for subsea engineering design and analysis is to a large extent based on the codes and standards used in the offshore industry for subsea pipelines. This paper describes a software application whose main purpose is to facilitate the design and analysis process, automatically generating results and documentation to increase documentation quality. The current scope is a standard calculation tool covering different aspects of design in compliance with relevant offshore codes. A modularization technique is used to divide the software system into multiple discrete and independent modules based on the offshore codes, each capable of carrying out its tasks independently. All modules operate from a project model that is accessed directly by the other modules for analysis and performance prediction; this allows design changes to flow through automatically and facilitates smooth communication and coordination between different design activities. All modules share a number of common design features. The quality of the implementation of each offshore code in an independent software module is measured by the level of inter-dependability among modules and their interactions (coupling), and by the degree of intra-dependability within the elements of a module (cohesion). The modularization technique also brings other benefits, such as ease of maintenance and updates. The improvements relate to the objective of a state-of-the-art procedure for performing engineering, design, and analysis using offshore codes implemented in a software application. The application is developed in the .NET C# language with MS Visual Studio technology, which provides a powerful graphical user interface well integrated into the Windows environment.
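As a minimal sketch of the modular architecture described above (shown in C++ rather than the paper's C#, with hypothetical names and an invented acceptance criterion, since the actual interfaces are not given), each offshore-code check can be an independent module that reads a shared project model, so modules interact only through the model (low coupling) while keeping their calculations internal (high cohesion):

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct ProjectModel {                 // shared design data all modules read
    double outerDiameterM  = 0.2032;  // e.g. an 8-inch line
    double wallThicknessM  = 0.0127;
    double designPressurePa = 15.0e6;
};

class CodeModule {                    // common interface for every module
public:
    virtual ~CodeModule() = default;
    virtual std::string name() const = 0;
    virtual bool check(const ProjectModel& m) const = 0;
};

class WallThicknessModule : public CodeModule {
public:
    std::string name() const override { return "wall thickness"; }
    bool check(const ProjectModel& m) const override {
        // Hypothetical acceptance criterion, for illustration only.
        return m.wallThicknessM / m.outerDiameterM > 0.04;
    }
};

int main() {
    ProjectModel model;
    std::vector<std::unique_ptr<CodeModule>> modules;
    modules.push_back(std::make_unique<WallThicknessModule>());

    // A design change edited in `model` flows to every module automatically.
    for (const auto& mod : modules)
        std::cout << mod->name() << ": "
                  << (mod->check(model) ? "pass" : "fail") << '\n';
}
```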


2008 ◽  
Vol 05 (02) ◽  
pp. 273-287
Author(s):  
LI CHEN ◽  
HIROSHI OKUDA

This paper describes a parallel visualization library for large-scale datasets developed in the HPC-MW project. Three parallel frameworks are provided in the library to satisfy the different requirements of applications. The library is applicable to a variety of mesh types, covering particles, structured grids, and unstructured grids. Many techniques have been employed to improve the quality of the visualization. High speedup has been achieved through hardware-oriented optimization strategies on different platforms, from PC clusters to the Earth Simulator. Good results have been obtained on several typical parallel platforms, demonstrating the feasibility and effectiveness of our library.


2017 ◽  
Vol 27 (2) ◽  
pp. 183-195
Author(s):  
Wojciech Bożejko ◽  
Zenon Chaczko ◽  
Mariusz Uchroński ◽  
Mieczysław Wodecki

Abstract The subject of this work is a new idea of blocks for the cyclic flow shop problem with setup times, using multiple patterns of different sizes, determined for each machine, that constitute an optimal schedule of cities for the traveling salesman problem (TSP). We propose to take advantage of the Intel Xeon Phi parallel computing environment during the pattern-based determination of these so-called 'blocks', in effect significantly improving the quality of the obtained results.


2022 ◽  
Vol 9 ◽  
Author(s):  
Ekta Sonwani ◽  
Urvashi Bansal ◽  
Roobaea Alroobaea ◽  
Abdullah M. Baqasah ◽  
Mustapha Hedabou

Aiming to increase the shelf life of food, researchers are moving toward new methodologies to maintain food quality, as food grains are susceptible to spoilage due to precipitation, humidity, temperature, and a variety of other influences. As a result, efficient food spoilage tracking schemes are required to sustain food quality levels. We have designed a prototype to track food quality and to manage storage systems at home. Initially, we employ a Convolutional Neural Network (CNN) model to detect the type of fruit or vegetable. The proposed system then monitors the gas emission level, humidity level, and temperature of the produce using sensors and actuators to check the food spoilage level. This additionally allows the environment to be controlled and food spoilage to be avoided wherever possible. The customer is also informed of the food spoilage level by an alert message sent to their registered mobile number, based on the freshness and condition of the food. The model employed proved to have an accuracy rate of 95%. Finally, the experiment succeeded in increasing the shelf life of some categories of food by two days.
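A hedged sketch of the monitoring step follows: the CNN's detected produce type selects per-type limits, and the sensor readings are graded against them. All names and threshold values here are hypothetical placeholders, not values from the paper:

```cpp
#include <iostream>
#include <map>
#include <string>

struct Limits { double maxGasPpm, maxHumidityPct, maxTempC; };

std::string spoilageLevel(const std::string& produce,
                          double gasPpm, double humidityPct, double tempC) {
    // Placeholder per-type limits; a real system would calibrate these.
    static const std::map<std::string, Limits> limits = {
        {"banana", {400.0, 90.0, 25.0}},
        {"tomato", {350.0, 85.0, 22.0}},
    };
    auto it = limits.find(produce);
    if (it == limits.end()) return "unknown produce";

    // Count how many sensor readings exceed their limit.
    int exceeded = (gasPpm      > it->second.maxGasPpm)
                 + (humidityPct > it->second.maxHumidityPct)
                 + (tempC       > it->second.maxTempC);
    if (exceeded == 0) return "fresh";
    return exceeded == 1 ? "at risk: send alert" : "spoiled: send alert";
}

int main() {
    std::cout << spoilageLevel("banana", 420.0, 80.0, 24.0) << '\n';
}
```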


Author(s):  
Isabel Durán-Muñoz ◽  
María Rosario Bautista-Zambrana

This paper aims to discuss the main advantages that ontologies bring to the field of terminology and its users, focusing on different aspects and needs. Throughout the paper, ontologies are acknowledged as a valuable resource for improving the quality of terminological projects as well as the content of terminologies, but it also seems appropriate to define the concept of ontology more precisely and to outline its benefits and limitations. To do so, we first discuss the multidisciplinarity of ontologies and their main recent uses within different disciplines. Secondly, we focus on terminology studies and theories and depict the evolution of this resource in the terminology field during the last decades, which has brought about the appearance of new methodologies and applications. Next, we put forward the advantages that ontologies bring to terminology in general and to several linguistic phenomena in particular (multidimensionality, for example), so as to shed some light on their importance in this field. Finally, we conclude with a discussion of significant drawbacks encountered, along with some final remarks about the use of ontologies in terminology work.


Author(s):  
Tiansi Dong ◽  
Zhigang Wang ◽  
Juanzi Li ◽  
Christian Bauckhage ◽  
Armin B. Cremers

A Triple in a knowledge graph takes the form (head, relation, tail). Triple Classification is used to determine the truth value of an unknown Triple. This is a hard task for 1-to-N relations using the vector-based embedding approach. We propose a new region-based embedding approach using fine-grained type chains. A novel geometric process is presented to extend the vectors of pre-trained entities into n-balls (n-dimensional balls) under the condition that head balls shall contain their tail balls. Our algorithm achieves zero energy cost and therefore serves as a case study of perfectly imposing tree structures into vector space. An unknown Triple (h, r, x) will be predicted as true when x's n-ball is located in the r-subspace of h's n-ball, following the same construction as the known tails of h. The experiments are based on large datasets derived from the benchmark datasets WN11, FB13, and WN18. Our results show that the performance of the new method is related to the length of the type chain and the quality of the pre-trained entity embeddings, and that long chains with well-trained entity embeddings outperform other methods in the literature. Source code and datasets are located at https://github.com/GnodIsNait/mushroom.
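The containment condition driving prediction reduces to a simple geometric test: ball B(c_t, r_t) lies inside ball B(c_h, r_h) exactly when ||c_h - c_t|| + r_t <= r_h. Below is a minimal sketch of that test, simplifying away the r-subspace details the abstract mentions:

```cpp
// Ball-containment check underlying the region-based prediction:
// the tail's n-ball must lie entirely inside the head's n-ball.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

struct NBall {
    std::vector<double> center;
    double radius;
};

bool contains(const NBall& outer, const NBall& inner) {
    double sq = 0.0;
    for (std::size_t i = 0; i < outer.center.size(); ++i) {
        double d = outer.center[i] - inner.center[i];
        sq += d * d;
    }
    // ||c_outer - c_inner|| + r_inner <= r_outer
    return std::sqrt(sq) + inner.radius <= outer.radius;
}

int main() {
    NBall head{{0.0, 0.0, 0.0}, 2.0};
    NBall tail{{0.5, 0.5, 0.0}, 1.0};
    std::cout << (contains(head, tail) ? "predicted true" : "predicted false")
              << '\n';   // prints "predicted true": 0.707 + 1.0 <= 2.0
}
```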


2015 ◽  
Vol 16 (3) ◽  
pp. 303-306 ◽  
Author(s):  
Adeleke Victor Adedayo

Purpose – The purpose of this paper is to suggest that citations made in the introduction and literature review sections of academic writings should not count in citation analyses used to measure the quality of research papers. Design/methodology/approach – Elucidatory expositions are made on the purposes of the introduction and literature review sections. Findings – The nature of the citations used to establish these purposes is identified and used to argue that citations made in these sections should not count in citation analyses that are used to determine the quality of publications. Introduction sections are written to identify the importance of and justification for the subject of study, while literature reviews are written to identify gaps, opposing views, strengths, and weaknesses in existing knowledge. Originality/value – This paper will provide insight into and awareness of new methodologies to cull and curate appropriate citation counts in the computation of the quality of publications.

