The Use of Surface Topography for the Identification of Discontinuous Displacements Due to Cracks

Metals ◽  
2020 ◽  
Vol 10 (8) ◽  
pp. 1037
Author(s):  
Fatih Uzun ◽  
Alexander M. Korsunsky

The determination of three components of displacements at material surfaces is possible using the surface topography of the undeformed (reference) and deformed states. The height digital image correlation (hDIC) technique was developed and demonstrated to achieve micro-level in-plane resolution and nanoscale out-of-plane precision. However, in its original formulation, hDIC, like other topography-based correlation techniques, performs well only in the determination of continuous displacements. In the present study of material deformation up to cracking and final failure, the ability to identify discontinuous triaxial displacements at emerging discontinuities is important. For this purpose, a new method reported herein was developed based on the hDIC technique. The hDIC solution procedure comprises two stages, namely, integer-pixel level correlation and sub-pixel level correlation. In order to predict the displacement and height changes in discontinuous regions, a smoothing stage was inserted between the two main stages. The proposed method accurately determines the discontinuous edges, and the out-of-plane displacements become sharply resolved without any further intervention in the algorithm. The high computational demand of determining discontinuous displacements from high-density topography data was tackled by employing the parallel computing capability of the graphics processing unit (GPU) together with a paging approach. The hDIC technique with GPU parallel computing was applied to the identification of discontinuous edges in an aluminium alloy dog-bone test specimen subjected to tensile testing up to failure.
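To make the two-stage workflow concrete, here is a minimal sketch (Python/NumPy, not the authors' implementation): an integer-pixel correlation search over topography patches, and a median-filter smoothing of the resulting displacement field of the kind that could sit between the integer-pixel and sub-pixel stages. The function names and the choice of a median filter are illustrative assumptions.

```python
# Illustrative sketch only -- not the hDIC code described in the abstract.
import numpy as np
from scipy.ndimage import median_filter

def integer_pixel_shift(ref_patch, def_region):
    """Brute-force normalized cross-correlation of a reference topography patch
    against a search region in the deformed topography; returns the integer
    (dy, dx) shift with the highest correlation score."""
    ph, pw = ref_patch.shape
    ref = (ref_patch - ref_patch.mean()) / (ref_patch.std() + 1e-12)
    best, best_shift = -np.inf, (0, 0)
    for dy in range(def_region.shape[0] - ph + 1):
        for dx in range(def_region.shape[1] - pw + 1):
            cand = def_region[dy:dy + ph, dx:dx + pw]
            cand = (cand - cand.mean()) / (cand.std() + 1e-12)
            score = float(np.sum(ref * cand))
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

def smooth_integer_field(u_int, kernel=3):
    """A smoothing step between the integer-pixel and sub-pixel stages
    (here a simple median filter, an assumed stand-in for the paper's scheme)."""
    return median_filter(u_int, size=kernel)
```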

In this digital age, circuit design, analysis, and validation are not only fundamental but crucial steps in industry and research. Simulation software is available for circuit analysis, but it proves too slow for very large circuits or for executing thousands of iterations of transient analysis. Accelerating the simulator is as important as speeding up circuit design. In this paper, we address circuit analysis using a parallel computing approach on the Graphics Processing Unit (GPU). Nowadays, high-end GPUs are available with sufficient memory in the architecture itself. Circuit-processing functions are analysed to identify compute-intensive functions. Mathematical operations are redesigned so that they execute in parallel; the LU decomposition algorithm and complex math operations are converted to parallel form. Some mathematical operations are simplified so that they can be merged into suitable clusters. A clustering approach is used to find kernels of uniform operations to map onto GPU cores. GPU programming strategies such as if-else in-lining and parallel reduction are useful in accelerating circuit operations. Using look-up tables in shared or constant memory proves useful for fast data access. A speed gain of at least 15% is achieved for operational analysis and 40% for transient analysis of regular circuits.
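As a rough illustration of the core numerical step such a simulator accelerates, the sketch below (Python/SciPy, CPU-side, not the paper's GPU code) solves a hypothetical nodal-analysis system G·v = i with an LU factorization; in a transient analysis the same factors would be reused across time steps, which is the kind of repeated work that benefits from GPU parallelism.

```python
# Hypothetical 3-node resistive network used only for illustration.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

G = np.array([[ 1.5, -0.5,  0.0],      # conductance (nodal admittance) matrix
              [-0.5,  1.2, -0.7],
              [ 0.0, -0.7,  1.7]])
i_src = np.array([1.0, 0.0, -0.2])     # current injections at each node

lu, piv = lu_factor(G)                 # LU decomposition (done once)
v = lu_solve((lu, piv), i_src)         # back-substitution per solve/time step
print("node voltages:", v)
```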


2012 ◽  
Vol 17 (3) ◽  
pp. 21-28 ◽  
Author(s):  
Beata Marciniak ◽  
Tomasz Marciniak ◽  
Zbigniew Lutowski ◽  
Sławomir Bujnowski

Abstract In this paper, the analysis of the possibilities of using Digital Image Correlation (DIC) based on the Graphics Processing Unit (GPU) for strain analysis in fatigue cracking processes is presented. The basic requirement for the discussed displacement and strain measurement method under time-variable loads was high measurement sensitivity combined with minimal measurement time. For this purpose, special computing procedures based on multiprocessor graphics cards were developed, which significantly reduced the total time of displacement and strain analysis. The developed digital image correlation procedure was applied to an example of displacement analysis in fatigue crack propagation testing of airplane riveted joints. The paper presents the results of research conducted by the team led by Professor Antoni Zabłudowski.
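For context, the step that follows displacement correlation in such an analysis is the computation of strains from the measured displacement field. The sketch below (Python/NumPy) shows a generic small-strain calculation by finite differences; it is a textbook form, not the procedures developed by the authors.

```python
import numpy as np

def small_strain(u, v, dx=1.0, dy=1.0):
    """u, v: in-plane displacement components on a regular grid.
    Returns the small-strain components exx, eyy, exy."""
    du_dy, du_dx = np.gradient(u, dy, dx)   # derivatives along rows (y) and columns (x)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    exx = du_dx
    eyy = dv_dy
    exy = 0.5 * (du_dy + dv_dx)             # engineering shear strain is 2*exy
    return exx, eyy, exy
```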


Author(s):  
Szymon Grabia ◽  
Ula Smyczynska ◽  
Konrad Pagacz ◽  
Wojciech Fendler

Abstract
Motivation: Multi-gene expression assays are an attractive tool for revealing complex regulatory mechanisms in living organisms. Normalization is an indispensable step of data analysis in all such studies, since it removes unwanted, non-biological variability from the data. In targeted qPCR assays, normalization is typically performed with respect to prespecified reference genes, but the lack of a robust strategy for their selection is reported in the literature, especially in studies concerning circulating microRNAs (miRNAs).
Results: Previous studies concluded that averaged expressions of multi-miRNA combinations are more stable references than single genes. However, because of the number of such combinations, the computational load is considerable and may hinder objective reference selection in large datasets. Existing implementations of normalization algorithms (geNorm, NormFinder and BestKeeper) perform poorly because every combination is evaluated sequentially. We therefore designed an integrative tool that implements those methods in a parallel manner on a graphics processing unit (GPU) using the CUDA platform. We tested our approach on publicly available microRNA expression datasets. As a result, execution times decreased 19-, 105- and 77-fold for geNorm, BestKeeper and NormFinder, respectively.
Availability: NormiRazor is available as a web application at norm.btm.umed.pl.
Contact: Wojciech Fendler, [email protected].
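The combinatorial workload described above can be sketched as follows (Python/NumPy). The scoring below is a simple standard-deviation criterion applied to data normalized against each candidate combination's averaged expression; it is only a stand-in for the geNorm/NormFinder/BestKeeper formulas that NormiRazor implements, and the miRNA names and data are made up.

```python
import numpy as np
from itertools import combinations

def rank_reference_combinations(cq, names, k=2):
    """cq: samples x miRNAs matrix of Cq values.
    Scores every k-miRNA combination by the mean standard deviation of the
    data normalized (delta-Cq) against the combination's averaged expression."""
    scores = []
    for combo in combinations(range(cq.shape[1]), k):
        ref = cq[:, combo].mean(axis=1, keepdims=True)   # averaged reference
        normalized = cq - ref
        scores.append((normalized.std(axis=0).mean(),
                       [names[i] for i in combo]))
    return sorted(scores)

rng = np.random.default_rng(0)
data = rng.normal(25, 2, size=(12, 6))                   # 12 samples, 6 miRNAs
print(rank_reference_combinations(data, [f"miR-{i}" for i in range(6)])[:3])
```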


2014 ◽  
Vol 136 (12) ◽  
Author(s):  
Rui Liu ◽  
Surya P. Vanka ◽  
Brian G. Thomas

In this paper, we study particle transport and deposition in a turbulent square duct flow with an imposed magnetic field using direct numerical simulations (DNS) of the continuous flow and Lagrangian tracking of particles. The magnetic field and the velocity induce a current and the interaction of this current with the magnetic field generates a Lorentz force that brakes the flow and modifies the flow structure. A second-order accurate finite volume method is used to integrate the coupled Navier–Stokes and magnetohydrodynamic (MHD) equations and the solution procedure is implemented on a graphics processing unit (GPU). Magnetically nonconducting particles of different Stokes numbers are continuously injected at random locations in the inlet cross section of the duct and their rates of deposition on the duct walls are studied with and without a magnetic field. Because the magnetic field modifies the instantaneous turbulent flow structures, the deposition rates on the walls perpendicular to the magnetic field are lower than those on the walls parallel to it, and the deposition patterns differ accordingly.
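As a point of reference for the particle phase, a single explicit step of Lagrangian tracking with Stokes drag can be sketched as below (Python/NumPy). This is a generic form under assumed names, not the solver used in the paper, which couples the particles to a DNS velocity field.

```python
import numpy as np

def advance_particle(x_p, u_p, u_fluid_at_p, tau_p, dt):
    """x_p, u_p: particle position and velocity (arrays);
    u_fluid_at_p: fluid velocity interpolated to the particle location;
    tau_p: particle response time (sets the Stokes number)."""
    a_drag = (u_fluid_at_p - u_p) / tau_p   # Stokes drag acceleration
    u_new = u_p + dt * a_drag
    x_new = x_p + dt * u_new
    return x_new, u_new
```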


SPE Journal ◽  
2016 ◽  
Vol 21 (04) ◽  
pp. 1425-1435 ◽  
Author(s):  
Cheng Chen ◽  
Zheng Wang ◽  
Deepak Majeti ◽  
Nick Vrvilo ◽  
Timothy Warburton ◽  
...  

Summary Shale permeability is sufficiently low to require an unconventional scale of stimulation treatments, such as very-large-volume, high-rate, multistage hydraulic-fracturing applications. Upscaling of hydrocarbon transport processes in shales is challenging because of the low permeability and strong heterogeneity. Rock characterization with high-resolution imaging [X-ray tomography and scanning electron microscope (SEM)] is usually highly localized and contains significant uncertainties because of the small field of view. Therefore, an effective high-performance computing method is required to collect information over a larger scale to meet the ergodicity requirement in upscaling. The lattice Boltzmann (LB) method has received significant attention in computational fluid dynamics because of its capability in coping with complicated boundary conditions. A combination of high-resolution imaging and LB simulation is a powerful approach for evaluating the transport properties of a porous medium in a timely manner, on the basis of the numerical solution of the Navier-Stokes equations and Darcy's law. In this work, a graphics-processing-unit (GPU)-enhanced lattice Boltzmann simulator (GELBS) was developed, which was optimized by GPU parallel computing on the basis of the inherent parallelism of the LB method. Specifically, the LB method was used to implement the computational kernel; a sparse data structure was applied to optimize memory allocation; and the OCCA (Medina et al. 2014) portability library was used, which enables the GELBS codes to use different application-programming interfaces (APIs), including open computing language (OpenCL), compute unified device architecture (CUDA), and open multiprocessing (OpenMP). OpenCL is an open standard for cross-platform parallel computing, CUDA is supported only by NVIDIA devices, and OpenMP is primarily used on central processing units (CPUs). It was found that the GPU-accelerated code was approximately 1,000 times faster than the unoptimized serial code and 10 times faster than the parallel code run on a standalone CPU. The CUDA code was slightly faster than the OpenCL code on the NVIDIA GPU because of the extra cost incurred by OpenCL in adapting to a heterogeneous platform. The GELBS was validated by comparing it with analytical solutions, laboratory measurements, and other independent numerical simulators in previous studies, and it was shown to have second-order global accuracy. The GELBS was then used to analyze thin cuttings extracted from a sandstone reservoir and a shale-gas reservoir. The sandstone permeabilities were found to be relatively isotropic, whereas the shale permeabilities were strongly anisotropic because of the horizontal lamination structure. In shale cuttings, the average permeability in the horizontal direction was higher than that in the vertical direction by approximately two orders of magnitude. Correlations between porosity and permeability were observed in both rocks. The combination of GELBS and high-resolution imaging methods makes for a powerful tool for permeability evaluation when conventional laboratory measurement is impossible because of small cutting sizes. The constitutive correlations between geometry and transport properties can be used for upscaling in different rock types. The GPU-optimized code significantly accelerates the computing speed; thus, many more samples can be analyzed given the same processing time. Consequently, the ergodicity requirement is met, which leads to better reservoir characterization.
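The kernel that such a GPU LB code parallelizes over lattice nodes is the collide-and-stream update. The sketch below (Python/NumPy) shows a generic single-relaxation-time (BGK) update on a D2Q9 lattice with periodic streaming; it is a textbook form and not the GELBS implementation, which uses a sparse data structure and the OCCA portability layer.

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def bgk_step(f, tau):
    """f: 9 x ny x nx array of distribution functions; tau: relaxation time.
    Returns f after one collision and (periodic) streaming step."""
    rho = f.sum(axis=0)
    ux = np.einsum('i,ijk->jk', c[:, 0], f) / rho
    uy = np.einsum('i,ijk->jk', c[:, 1], f) / rho
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    feq = w[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)
    f = f - (f - feq) / tau                       # BGK collision
    for i, (cx, cy) in enumerate(c):              # streaming with periodic wrap
        f[i] = np.roll(np.roll(f[i], cy, axis=0), cx, axis=1)
    return f
```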

