Performance evaluation in the reconstruction of 2D images of computed tomography using massively parallel programming CUDA

Author(s):  
Alexssandro Ferreira Cordeiro ◽  
Pedro Luiz de Paula Filho ◽  
Hamilton Pereira Silva ◽  
Arnaldo Candido Junior ◽  
Edresson Casanova ◽  
...  

Abstract Purpose: to analyse the processing time and the similarity of the images generated on CPU and GPU architectures using sequential and parallel programming methodologies. Material and methods: for image processing, a computer with an AMD FX-8350 processor and an Nvidia GTX 960 Maxwell GPU was used, together with the CUDAFY library and the C# programming language in the Visual Studio IDE. Results: the comparisons indicate that sequential programming on the CPU generates reliable images at a high cost in time compared with the parallel approaches on CPU and GPU, whereas parallel programming produces faster results but with increased noise in the reconstructed image. For the float data type the GPU obtained the best result, with an average time equivalent to 1/3 of that of the processor; for the double data type, however, the parallel CPU approach obtained the best performance. Conclusion: for the float data type the GPU had the best average-time performance, while for the double data type the best average-time performance was achieved by the parallel CPU approach. Regarding image quality, the sequential approach produced similar outputs, while the parallel approaches generated noise in their outputs.
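A minimal C++17 sketch (not the authors' C#/CUDAFY code) of the kind of comparison the abstract describes: the same pixel-wise backprojection loop timed once with a sequential and once with a parallel execution policy. The sinogram layout, image size, and the backprojectPixel helper are illustrative assumptions.

```cpp
// Illustrative only: same backprojection loop, run sequentially and in parallel.
#include <algorithm>
#include <chrono>
#include <cmath>
#include <execution>
#include <iostream>
#include <numeric>
#include <vector>

// Hypothetical sinogram layout: nAngles x nDetectors samples, row-major.
struct Sinogram {
    int nAngles, nDetectors;
    std::vector<float> data;
    float at(int a, int d) const { return data[a * nDetectors + d]; }
};

// Accumulate, for one pixel, the detector sample it projects onto in every view.
float backprojectPixel(const Sinogram& s, int x, int y, int size) {
    float cx = x - size / 2.0f, cy = y - size / 2.0f, acc = 0.0f;
    for (int a = 0; a < s.nAngles; ++a) {
        float theta = a * 3.14159265f / s.nAngles;
        int d = static_cast<int>(cx * std::cos(theta) + cy * std::sin(theta)) + s.nDetectors / 2;
        if (d >= 0 && d < s.nDetectors) acc += s.at(a, d);
    }
    return acc;
}

int main() {
    const int size = 256;
    Sinogram s{180, 256, std::vector<float>(180 * 256, 1.0f)};
    std::vector<int> pixels(size * size);
    std::iota(pixels.begin(), pixels.end(), 0);     // linear pixel indices
    std::vector<float> image(size * size);

    // Time one full reconstruction with the given execution policy.
    auto reconstruct = [&](auto policy) {
        auto t0 = std::chrono::steady_clock::now();
        std::transform(policy, pixels.begin(), pixels.end(), image.begin(),
                       [&](int p) { return backprojectPixel(s, p % size, p / size, size); });
        return std::chrono::duration<double, std::milli>(std::chrono::steady_clock::now() - t0).count();
    };

    std::cout << "sequential: " << reconstruct(std::execution::seq) << " ms\n";
    std::cout << "parallel:   " << reconstruct(std::execution::par) << " ms\n";
}
```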

2018 ◽  
Vol 4 (1) ◽  
pp. 555-558 ◽  
Author(s):  
Fang Chen ◽  
Jan Müller ◽  
Jens Müller ◽  
Ronald Tetzlaff

Abstract: In this contribution we propose a feature-based method for motion estimation and correction in intraoperative thermal imaging during brain surgery. The motion is estimated from co-registered white-light images in order to perform a robust motion correction on the thermographic data. To ensure real-time performance of an intraoperative application, we optimise the processing time, which essentially depends on the number of key points found by our algorithm. For this purpose we evaluate the effect of applying non-maximum suppression (NMS) to improve the feature detection efficiency. Furthermore, we propose an adaptive method to determine the size of the suppression area, resulting in a trade-off between accuracy and processing time.
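A minimal sketch of the NMS step referenced above (assumed keypoint structure, not the authors' implementation): greedy non-maximum suppression that keeps only the strongest keypoint within a given suppression radius, thereby reducing the number of features passed to motion estimation.

```cpp
// Illustrative only: greedy radius-based non-maximum suppression of keypoints.
#include <algorithm>
#include <vector>

struct Keypoint { float x, y, response; };   // assumed detector output

// Keep a keypoint only if no stronger keypoint was already kept within `radius`.
std::vector<Keypoint> nonMaxSuppress(std::vector<Keypoint> pts, float radius) {
    std::sort(pts.begin(), pts.end(),
              [](const Keypoint& a, const Keypoint& b) { return a.response > b.response; });
    std::vector<Keypoint> kept;
    for (const auto& p : pts) {
        bool suppressed = false;
        for (const auto& k : kept) {
            float dx = p.x - k.x, dy = p.y - k.y;
            if (dx * dx + dy * dy < radius * radius) { suppressed = true; break; }
        }
        if (!suppressed) kept.push_back(p);
    }
    return kept;
}
```

An adaptive choice of the radius, as proposed in the paper, would enlarge or shrink the suppression area until the retained keypoint count fits the real-time budget, which is the accuracy versus processing-time trade-off described above.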


Author(s):  
Tarik Chafiq ◽  
Mohammed Ouadoud ◽  
Hassane Jarar Oulidi ◽  
Ahmed Fekri

The aim of this research work is to ensure the integrity and correction of a geotechnical database that contains anomalies. These anomalies occurred mainly in the phase of inputting and/or transferring data. The algorithm created in the framework of this paper was tested on a dataset of 70 core drillings. It is based on a multi-criteria analysis qualifying the integrity of the geotechnical data using a sequential approach. The implementation of this algorithm produced a relevant set of output values, which minimizes processing time and manual verification. The methodology used in this paper could be useful to define the type of foundation adapted to the nature of the subsoil and thus to foresee the adequate budget.
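An illustrative sketch of such a sequential multi-criteria integrity check (the record fields and criteria below are hypothetical; the paper's actual rules are not listed in the abstract): each core-drilling record is passed through a list of checks and failures are reported for correction.

```cpp
// Illustrative only: sequential multi-criteria integrity checks on drilling records.
#include <functional>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

struct Borehole {                     // hypothetical geotechnical record
    std::string id;
    double topDepth, bottomDepth;     // metres
    double sptValue;                  // standard penetration test blow count
};

int main() {
    using Check = std::pair<std::string, std::function<bool(const Borehole&)>>;
    const std::vector<Check> checks = {
        {"depth interval inverted", [](const Borehole& b) { return b.bottomDepth > b.topDepth; }},
        {"negative depth",          [](const Borehole& b) { return b.topDepth >= 0.0; }},
        {"SPT value out of range",  [](const Borehole& b) { return b.sptValue >= 0.0 && b.sptValue <= 100.0; }},
    };

    const std::vector<Borehole> records = {{"BH-01", 0.0, 12.5, 18.0}, {"BH-02", 3.0, 1.5, 25.0}};
    for (const auto& r : records)                 // sequential pass over the dataset
        for (const auto& [label, ok] : checks)    // apply every criterion
            if (!ok(r)) std::cout << r.id << ": " << label << '\n';
}
```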


2021 ◽  
Author(s):  
Behzad Pouladiborj ◽  
Olivier Bour ◽  
Niklas Linde ◽  
Laurent Longuevergne

Hydraulic tomography is a state-of-the-art method for inferring hydraulic conductivity fields using head data. Here, a numerical model is used to simulate a steady-state hydraulic tomography experiment by assuming a Gaussian hydraulic conductivity field (with constant storativity) and generating head and flux data at different observation points. We employed geostatistical inversion using head and flux data individually and jointly to better understand the relative merits of each data type. For the typical case of a small number of observation points, we find that flux data provide a better-resolved hydraulic conductivity field than head data when considering data with similar signal-to-noise ratios. For a high number of observation points, we find the estimated fields to be of similar quality regardless of the data type. A resolution analysis for a small number of observations reveals that head data average over a broader region than flux data, and that flux data can better resolve the hydraulic conductivity field. The inversions' performance depends on the borehole boundary conditions, with the best-performing settings for flux data and head data being constant head and constant rate, respectively. The joint inversion of both data types, however, is insensitive to the borehole boundary type. Considering the same number of observations, the joint inversion of head and flux data does not offer advantages over the individual inversions. By increasing the variance of the hydraulic conductivity field, we find that the resulting increased non-linearity makes it more challenging to recover high-quality estimates of the reference field. Our findings should be useful for the future planning and design of hydraulic tomography tests involving flux and head data.
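A conceptual sketch of the geostatistical estimate underlying such inversions (the study uses a nonlinear groundwater forward model; here a generic linear stand-in d = G s + noise with hypothetical head and flux sensitivities is used, and the Eigen library is assumed): head rows and flux rows of the sensitivity matrix can be inverted individually, or stacked for a joint inversion.

```cpp
// Conceptual only: linear geostatistical estimate s_hat = Cs G^T (G Cs G^T + Cd)^{-1} d.
#include <Eigen/Dense>
#include <iostream>

Eigen::VectorXd geostatEstimate(const Eigen::MatrixXd& G,   // sensitivity of data to field cells
                                const Eigen::VectorXd& d,   // observed data (head or flux)
                                const Eigen::MatrixXd& Cs,  // prior covariance of the field
                                const Eigen::MatrixXd& Cd)  // data-error covariance
{
    Eigen::MatrixXd GCs = G * Cs;
    Eigen::VectorXd w = (GCs * G.transpose() + Cd).ldlt().solve(d);
    return GCs.transpose() * w;                             // posterior mean of the field
}

int main() {
    const int nCells = 50, nObs = 5;
    // Hypothetical sensitivities: head rows average broadly, flux rows act more locally.
    Eigen::MatrixXd Gh = Eigen::MatrixXd::Constant(nObs, nCells, 1.0 / nCells);
    Eigen::MatrixXd Gf = Eigen::MatrixXd::Zero(nObs, nCells);
    for (int i = 0; i < nObs; ++i) Gf(i, i * nCells / nObs) = 1.0;

    Eigen::MatrixXd Cs = Eigen::MatrixXd::Identity(nCells, nCells);     // prior covariance
    Eigen::MatrixXd Cd = 0.01 * Eigen::MatrixXd::Identity(nObs, nObs);  // noise covariance

    Eigen::VectorXd sTrue = Eigen::VectorXd::Random(nCells);            // reference field
    Eigen::VectorXd sHead = geostatEstimate(Gh, Gh * sTrue, Cs, Cd);
    Eigen::VectorXd sFlux = geostatEstimate(Gf, Gf * sTrue, Cs, Cd);
    std::cout << "head-only error: " << (sHead - sTrue).norm() << '\n'
              << "flux-only error: " << (sFlux - sTrue).norm() << '\n';
}
```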


2020 ◽  
pp. 165-188
Author(s):  
Sam Featherston

This chapter is a contribution to the ongoing debate about the necessary quality of the database for theory building in research on syntax. In particular, the focus is upon introspective judgments as a data type or group of data types. In the first part, the chapter lays out some of the evidence for the view that the judgments of a single person or of a small group of people are much less valid than the judgments of a group. In the second part, the chapter criticizes what the author takes to be overstatements and overgeneralizations of findings by Sprouse, Almeida, and Schütze that are sometimes viewed as vindicating an “armchair method” in linguistics. The final part of the chapter attempts to sketch out a productive route forward that empirically grounded syntax could take.


1993 ◽  
Vol 02 (01) ◽  
pp. 33-46 ◽  
Author(s):  
DANIEL P. MIRANKER ◽  
FREDERIC H. BURKE ◽  
JERI J. STEELE ◽  
JOHN KOLTS ◽  
DAVID R. HAUG

Most rule-system execution environments, having been derived from LISP, inference on internally defined data types and come packaged with stand-alone development environments. Data derived from outside these systems must be reformatted before it can be evaluated. This mismatch leads to a duplicate representation of data, which, in turn, introduces both performance and semantic problems. This paper describes a C++ Embeddable Rule System (CERS) which avoids this mismatch. CERS is a compiled, forward-chaining rule system that inferences directly on arbitrary C++ objects. CERS can be viewed as an extension of C++, where the methods associated with a ruleset class can be defined either procedurally or declaratively. CERS is unique in that rules may match against and manipulate arbitrary, user-defined C++ objects. There is no requirement that the developer anticipated the use of CERS when defining the class. Thus CERS rules can inference over data objects instantiated in persistent object stores and in third-party C++ abstract data-type libraries.
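CERS's actual rule syntax and API are not given in the abstract, so the hypothetical mini engine below only illustrates the underlying idea: forward-chaining rules whose conditions and actions match and manipulate ordinary user-defined C++ objects, with no engine-specific working-memory type.

```cpp
// Conceptual only: naive forward chaining over plain C++ objects (not CERS's API).
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct Order {                 // ordinary application class, not an engine-defined type
    std::string id;
    double total;
    bool flagged = false;
};

struct Rule {
    std::function<bool(const Order&)> condition;   // left-hand side
    std::function<void(Order&)> action;            // right-hand side
};

// Fire matching rules repeatedly until a full pass changes nothing.
void run(std::vector<Order>& facts, const std::vector<Rule>& rules) {
    bool fired = true;
    while (fired) {
        fired = false;
        for (auto& o : facts)
            for (const auto& r : rules)
                if (r.condition(o)) { r.action(o); fired = true; }
    }
}

int main() {
    std::vector<Order> orders = {{"A-1", 1200.0}, {"A-2", 80.0}};
    std::vector<Rule> rules = {{
        [](const Order& o) { return o.total > 1000.0 && !o.flagged; },  // condition turns false after firing
        [](Order& o) { o.flagged = true; std::cout << o.id << " flagged\n"; }
    }};
    run(orders, rules);
}
```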

