programming effort
Recently Published Documents


TOTAL DOCUMENTS: 79 (five years: 16)
H-INDEX: 10 (five years: 2)

Author(s): Charalampos Marantos, Lazaros Papadopoulos, Angeliki-Agathi Tsintzira, Apostolos Ampatzoglou, Alexander Chatzigeorgiou, ...

2021 · Vol 55 (1) · pp. 38-46
Author(s): Yiqiu Wang, Shangdi Yu, Laxman Dhulipala, Yan Gu, Julian Shun

In many applications of graph processing, the input data is often generated from an underlying geometric point data set. However, existing high-performance graph processing frameworks assume that the input data is given as a graph. Therefore, to use these frameworks, the user must write or use external programs based on computational geometry algorithms to convert their point data set to a graph, which requires extra programming effort and can also lead to performance degradation. In this paper, we present our ongoing work on the GeoGraph framework for shared-memory multicore machines, which seamlessly supports routines for parallel geometric graph construction and parallel graph processing within the same environment. GeoGraph supports graph construction based on k-nearest neighbors, Delaunay triangulation, and β-skeleton graphs. It can then pass these generated graphs to over 25 graph algorithms. GeoGraph contains high-performance parallel primitives and algorithms implemented in C++, and includes a Python interface. We present four examples of using GeoGraph, along with experimental results showing good parallel speedups and improvements over the Higra library. We conclude with a vision of future directions for research in bridging graph and geometric data processing.
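The abstract above describes a two-step pipeline: construct a geometric graph (k-nearest neighbors, Delaunay, β-skeleton) from a point set, then run graph algorithms on it. GeoGraph's own API is not shown here, so the sketch below illustrates the same workflow with scikit-learn and SciPy as stand-ins; the point count and k value are arbitrary.

```python
# Illustrative stand-in for the GeoGraph workflow, not GeoGraph's API:
# build a k-NN graph from 2D points, then run a graph algorithm on it.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(0)
points = rng.random((10_000, 2))          # underlying geometric point data

# Geometric graph construction: symmetric k-NN adjacency (k = 8, assumed).
knn = kneighbors_graph(points, n_neighbors=8, mode="connectivity")
knn = knn.maximum(knn.T)                  # make the graph undirected

# Graph processing on the constructed graph.
n_components, labels = connected_components(knn, directed=False)
print(f"{n_components} connected components among {points.shape[0]} points")
```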


2020
Author(s): Flávio T. Mariotto, Luis F. Ugarte, Eduardo Lacusta Jr, Madson C. de Almeida

The control and monitoring of public transport buses via the Global Positioning System (GPS) produces a flood of data for the creation of public transportation indicators. Conventional data-analysis techniques for this type of information demand programming effort and the execution of processing-heavy algorithms over large databases before the intended statistical and visualization reports are finally produced. In this process, the search for new analyses or visualizations may require restarting the pipeline, controlling the versions of the developed programs, and repeatedly rerunning costly algorithms. This way of working hinders the cognitive process, the capacity for analysis, and the inference of relevant information. In this context, this paper proposes a methodology based on Visual Analytics to infer passenger demand from the trajectories of conventional buses in order to plan new routes served by electric buses at the State University of Campinas - UNICAMP. The methodology includes a space-time stage and a human-interaction stage, with easily configurable graphics for evaluating indicators. Results show the locations and times with the highest use of the transportation service and can be used to identify new routes to be served by electric buses on campus.
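As a concrete illustration of the space-time stage described above, the following sketch bins GPS pings by spatial grid cell and hour of day with pandas; the column names, grid resolution, and input file are hypothetical, since the actual UNICAMP data schema is not given in the abstract.

```python
# A minimal sketch of space-time aggregation of bus GPS data (assumed columns:
# timestamp, lat, lon). High-count cells and hours suggest candidate stops and
# service times for new electric-bus routes.
import pandas as pd

def demand_by_cell_and_hour(gps: pd.DataFrame, cell_deg: float = 0.002) -> pd.DataFrame:
    """Count GPS pings per spatial grid cell per hour of day."""
    df = gps.copy()
    df["hour"] = pd.to_datetime(df["timestamp"]).dt.hour
    df["cell_lat"] = (df["lat"] / cell_deg).round() * cell_deg
    df["cell_lon"] = (df["lon"] / cell_deg).round() * cell_deg
    return (df.groupby(["cell_lat", "cell_lon", "hour"])
              .size()
              .rename("pings")
              .reset_index()
              .sort_values("pings", ascending=False))

# Hypothetical usage:
# hotspots = demand_by_cell_and_hour(pd.read_csv("bus_gps.csv")).head(20)
```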


2020 · Vol 4 (1) · pp. 58
Author(s): Elizabeth R. Click, Mary Ann Dobbins

Background: The impact of financial well-being on health is significant. Research connects individual and organizational well-being to the strength of financial knowledge and sound finance practices among employees. Learning more about financial topics is critical for comprehensive well-being, yet few articles describe the implementation of such initiatives within organizations.
Aim: While traditional worksite wellness programs have emphasized physical activity, stress management, and nutrition, an increasingly large number of organizations want to expand beyond those categories to topics such as community health and financial well-being.
Methods: A Midwestern research-intensive university has offered financial well-being programs for faculty and staff over the last three years. A description of the initiative, and of each programming effort, is included in this article.
Results: Program participants have reported numerous positive qualitative outcomes.
Conclusions: This description of one university's program experience and outcomes illustrates the evidence base and practical efforts that can be readily implemented within higher education institutions.


2020
Author(s): N. Jeremy Hill, Scott W. J. Mooney, Glen T. Prusky

In neuroscientific experiments and applications, working with auditory stimuli demands software tools for generation and acquisition of raw audio, for composition and tailoring of that material into finished stimuli, for precisely timed presentation of the stimuli, and for experimental session recording. Numerous programming tools exist to approach these tasks, but their differing specializations and conventions demand extra time and effort for integration. In particular, verifying stimulus timing requires extensive engineering effort when developing new applications.

We present audiomath (https://pypi.org/project/audiomath), a sound software library for Python that prioritizes the needs of neuroscientists. It minimizes programming effort by providing a simple object-oriented interface that unifies functionality for audio generation, manipulation, visualization, decoding, encoding, recording, and playback. It also incorporates specialized tools for measuring and optimizing stimulus timing.

We provide an overview of the challenges and possible approaches to the problem of recording stimulus timing. We then report audio latency measurements across a range of hardware, operating systems, and settings, to illustrate the ways in which hardware and software factors interact to affect stimulus presentation performance, and the resulting pitfalls for the programmer and experimenter. In particular, we highlight the potential conflict between demands for low latency, low variability in latency ("jitter"), cooperativeness, and robustness. We report the ways in which audiomath can help to map this territory and provide a simplified path toward each application's particular priority.

By unifying audio-related functionality and providing specialized diagnostic tools, audiomath both simplifies and potentiates the development of neuroscientific applications in Python.
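As a rough sketch of the object-oriented interface the abstract describes, the snippet below loads and presents a stimulus; it assumes audiomath exposes Sound and Player classes with a Play() method (as suggested by the project documentation, not verified here), and the file name is a placeholder.

```python
# Hedged sketch of a load-and-present workflow; class and method names are
# assumptions about audiomath's interface, and "beep.wav" is a placeholder.
import audiomath as am

stimulus = am.Sound("beep.wav")    # decode a stimulus file into memory
player = am.Player(stimulus)       # pre-load it onto an output stream
player.Play()                      # present the stimulus when triggered
```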


2020 · Vol 110 (04) · pp. 264-269
Author(s): Hubert Würschinger, Matthias Mühlbauer, Nico Hanenkamp

In industrial practice, a large number of process and quality control tasks are performed visually by employees or with the aid of camera systems. By using artificial intelligence (AI), the programming effort, and thus the implementation of camera systems, can be made more efficient. For image analysis, pre-trained artificial neural networks can be used; applying these networks to new tasks is called transfer learning.
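A minimal transfer-learning sketch in the spirit of this abstract: reuse a network pre-trained on a large image corpus and retrain only its classification head for a new visual inspection task. PyTorch/torchvision, the class count, and the training details are stand-ins; the abstract does not name the framework or network actually used.

```python
# Transfer learning: freeze pre-trained features, replace and train the head.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 2                            # e.g. "OK" vs. "defect" (assumed)

model = models.resnet18(pretrained=True)   # network pre-trained on ImageNet
for param in model.parameters():
    param.requires_grad = False            # keep the transferred features fixed

# New classification head sized for the inspection task.
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
# Training would then iterate over labeled camera images of the process.
```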


2019
Author(s): Marcelo Pecenin, André Murbach Maidl, Daniel Weingaertner

Writing efficient image processing code is a very demanding task, and much programming effort is put into porting existing code to new generations of hardware. Moreover, what counts as efficient code varies according to the desired optimization target, such as runtime, energy consumption, or memory usage. We present a semi-automatic schedule generation system for the Halide DSL that uses a Reinforcement Learning agent to choose a set of scheduling options that optimizes the runtime of the resulting application. We compare our results to state-of-the-art implementations of three Halide pipelines and show that our agent is able to surpass hand-tuned code and Halide's auto-scheduler in most scenarios on CPU and GPU architectures.
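The sketch below illustrates the general idea of reinforcement-learning-driven schedule search, not the authors' system: an epsilon-greedy agent picks among a small set of scheduling options and is rewarded with negative runtime. The option names are invented, and measure_runtime() uses a toy cost model in place of compiling and benchmarking a real Halide pipeline.

```python
# Epsilon-greedy search over (hypothetical) Halide scheduling options.
import random

SCHEDULE_OPTIONS = ["parallel_y", "vectorize_x_8", "tile_32x32", "unroll_2"]
SPEEDUP = {"parallel_y": 3.0, "vectorize_x_8": 2.0, "tile_32x32": 1.5, "unroll_2": 1.1}

def measure_runtime(schedule):
    """Toy stand-in for compiling and timing the pipeline under `schedule`."""
    runtime = 100.0
    for opt in schedule:
        runtime /= SPEEDUP[opt]
    return runtime

def epsilon_greedy_search(episodes=200, eps=0.2):
    value = {}                                     # (count, mean reward) per schedule
    best = None
    for _ in range(episodes):
        if best is None or random.random() < eps:  # explore a random option subset
            schedule = tuple(o for o in SCHEDULE_OPTIONS if random.random() < 0.5)
        else:                                      # exploit the best schedule so far
            schedule = best
        reward = -measure_runtime(schedule)        # faster runs give larger rewards
        n, mean = value.get(schedule, (0, 0.0))
        value[schedule] = (n + 1, mean + (reward - mean) / (n + 1))
        best = max(value, key=lambda s: value[s][1])
    return best

print(epsilon_greedy_search())   # expected to settle on applying all options
```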


2019 · Vol 8 (9) · pp. 386
Author(s): Natalija Stojanovic, Dragan Stojanovic

Watershed analysis, as a fundamental component of digital terrain analysis, is based on the Digital Elevation Model (DEM), a grid (raster) model of the Earth's surface and topography. Watershed analysis consists of computationally and data-intensive algorithms that need to be implemented by leveraging parallel and high-performance computing methods and techniques. In this paper, the Multiple Flow Direction (MFD) algorithm for watershed analysis is implemented and evaluated on multi-core Central Processing Units (CPUs) and many-core Graphics Processing Units (GPUs), which provides significant improvements in performance and energy usage. The implementation is based on NVIDIA CUDA (Compute Unified Device Architecture) for the GPU, as well as on OpenACC (Open ACCelerators), a parallel programming model and standard for parallel computing. Both phases of the MFD algorithm, (i) iterative DEM preprocessing and (ii) the iterative MFD computation itself, are parallelized and run on the multi-core CPU and the GPU. The proposed solutions are evaluated with respect to execution time, energy consumption, and the programming effort required for parallelization, for different sizes of input data. The experimental evaluation has shown not only the advantage of using OpenACC over CUDA for implementing watershed analysis on a GPU in terms of performance, energy consumption, and programming effort, but also significant benefits of implementing it on the multi-core CPU.
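For reference, the core MFD partitioning rule (after Freeman/Quinn) distributes a cell's outflow among its lower neighbors in proportion to slope raised to an exponent p. The sketch below is serial NumPy rather than the paper's CUDA/OpenACC code, and the cell size and exponent are assumed values.

```python
# MFD flow partitioning for a single DEM cell: fraction to each lower neighbor
# is proportional to (slope ** p). Serial illustration, not the GPU code.
import numpy as np

NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def mfd_fractions(dem, row, col, cell_size=1.0, p=1.1):
    """Fraction of the cell's flow routed to each downslope neighbor."""
    z = dem[row, col]
    weights = {}
    for dr, dc in NEIGHBORS:
        r, c = row + dr, col + dc
        if 0 <= r < dem.shape[0] and 0 <= c < dem.shape[1] and dem[r, c] < z:
            dist = cell_size * (2 ** 0.5 if dr and dc else 1.0)
            weights[(r, c)] = ((z - dem[r, c]) / dist) ** p
    total = sum(weights.values())
    return {rc: w / total for rc, w in weights.items()} if total else {}

dem = np.array([[5., 4., 3.],
                [4., 3., 2.],
                [3., 2., 1.]])
print(mfd_fractions(dem, 1, 1))   # outflow fractions from the center cell
```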


2019
Author(s): Maximilian Scheurer, Peter Reinholdt, Erik Kjellgren, Jógvan Magnus Haugaard Olsen, Andreas Dreuw, ...

We present a modular open-source library for polarizable embedding (PE) named CPPE. The library is implemented in C++ and additionally provides a Python interface for rapid prototyping and experimentation in a high-level scripting language. Our library integrates seamlessly with existing quantum chemical program packages through an intuitive and minimal interface. To date, CPPE has been interfaced with three packages: Q-Chem, Psi4, and PySCF. Furthermore, we show CPPE in action using all three program packages for a computational spectroscopy application. With CPPE, host-program interfaces require only minor programming effort, paving the way for new combined methodologies and broader availability of the PE model.
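As an illustration of the minor programming effort on the host-program side, the sketch below attaches polarizable embedding to a PySCF SCF calculation; it assumes PySCF's solvent.PE wrapper (which delegates to CPPE) accepts a potential-file path, and both the molecule and "pe.pot" are hypothetical placeholders.

```python
# Hedged sketch: PE-augmented SCF in PySCF via CPPE. The potential file and
# molecule are placeholders; solvent.PE's exact signature is an assumption.
from pyscf import gto, scf, solvent

mol = gto.M(atom="O 0 0 0; H 0 0 0.96; H 0.93 0 -0.24", basis="sto-3g")
mf = solvent.PE(scf.RHF(mol), "pe.pot")   # attach the polarizable environment
mf.kernel()                               # SCF energy including PE contributions
```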

