Influence of the gravity effect on the recovery rate in uranium in-situ leaching

Author(s):  
M. B. Kurmanseiit ◽  
M. S. Tungatarova ◽  
K. A. Alibayeva ◽  
...  

In-situ leaching is a method of extracting minerals by selectively dissolving them with a leaching solution directly at the site of occurrence of the mineral. In practice, during the development of deposits by the in-situ leaching method, situations arise in which the solution tends to sink below the active thickness of the stratum. This may be caused by geological heterogeneity of the rock, or by gravitational settling of the solution due to the difference between the densities of the solution and the groundwater. As the solution settles with depth, recovery of the metal located in the upper part of the geological layers decreases. This article examines the effect of gravity on the flow regime during filtration of the solution through the rock. The gravitational effect on the flow of the solution is studied for different ratios of the solution and groundwater densities, without taking into account the interaction of the solution with the rock. CUDA technology is used to improve the performance of the calculations. The results show that using CUDA speeds up the calculations by a factor of 40-80, compared with a central processing unit (CPU), for different computational grids.
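The gravitational settling described above can be illustrated with a back-of-envelope Darcy estimate. This is a hedged sketch, not the paper's model: the permeability, viscosity and density values below are assumed for illustration, and interaction with the rock is ignored, as in the study.

```python
# Illustrative sketch: vertical Darcy sinking velocity of a leaching solution
# whose density exceeds that of groundwater. All parameter values assumed.

def sinking_velocity(k, mu, rho_solution, rho_water, g=9.81):
    """Vertical Darcy flux (m/s) driven purely by the density contrast:
    q_z = k * g * (rho_solution - rho_water) / mu."""
    return k * g * (rho_solution - rho_water) / mu

k = 1e-12       # permeability, m^2 (typical sandstone; assumed)
mu = 1e-3       # dynamic viscosity, Pa*s (water-like)
rho_w = 1000.0  # groundwater density, kg/m^3

for ratio in (1.00, 1.01, 1.05):
    v = sinking_velocity(k, mu, ratio * rho_w, rho_w)
    print(f"rho_s/rho_w = {ratio:.2f}: q_z = {v:.2e} m/s "
          f"({v * 86400 * 365:.2f} m/yr)")
```

Even a one-percent density excess yields metre-per-year settling in this toy setting, which is why the density ratio matters over the lifetime of a well field.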

Author(s):  
A. V. Nikitina ◽  
A. E. Chistyakov ◽  
A. M. Atayan

The purpose of this work is to create a software package for the distributed solution of the problem of pollutant transport in a reservoir with complex bathymetry and technological structures. An algorithm has been developed for the parallel solution of the pollutant transport problem on a graphics accelerator controlled by CUDA (Compute Unified Device Architecture); a comparative analysis of the algorithms on a CPU (Central Processing Unit) and on a GPU (Graphics Processing Unit) made it possible to evaluate their performance. The software implementation of the modules included in the complex is described, and the main classes and implemented methods are documented. Numerical experiments showed that solving the pollutant transport problem with CUDA is ineffective for small grids (up to 100 × 100 computational nodes). For large grids (1000 × 1000 computational nodes), CUDA reduces the computation time by an order of magnitude. Analysis of the experiments with the developed software components showed that the maximum ratio of the running time of the shallow-water matter-transport algorithm on the CPU to that of the same algorithm on the GPU was 24.92, achieved on a grid of 1000 × 1000 computational nodes. Methods for decomposing grid regions are proposed for solving computationally laborious diffusion-convection problems, including pollutant transport in a reservoir with complex bathymetry and technological objects, taking into account the architecture and parameters of the MSC (Multiprocessor Computing System) located at the infrastructure facility of the STU (Scientific and Technological University) "Sirius" (Sochi, Russia).
The time required to transmit and receive floating-point data was also considered as a property of the computing system. An algorithm for the parallel solution of the task under MPI (Message Passing Interface) technology has been developed, and its efficiency assessed. Speedup values of the proposed algorithm were obtained as a function of the number of computers (processors) involved and the size of the computational grid. The maximum number of computers used was 24; the maximum computational grid was 10 000 × 10 000 nodes. The developed algorithm showed low efficiency for small computational grids (up to 100 × 100 nodes). For large computational grids (from 1000 × 1000 nodes), MPI reduces the computation time severalfold.
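The grid-size dependence reported above can be reproduced qualitatively with a simple analytic cost model. All coefficients below (per-node compute time, message latency, per-word transfer time) are assumed for illustration; they are not measurements from the "Sirius" system.

```python
# A minimal cost model (coefficients assumed, not measured) for one
# domain-decomposed diffusion-convection step: interior compute scales as
# N^2 / p, while the halo exchange costs a latency term plus a term
# proportional to the subdomain boundary of length N.

def step_time(n, p, t_flop=1e-9, t_lat=1e-5, t_word=1e-8):
    compute = t_flop * n * n / p                          # interior update over p ranks
    comm = 0.0 if p == 1 else 2 * (t_lat + t_word * n)    # two-neighbour halo exchange
    return compute + comm

def speedup(n, p):
    return step_time(n, 1) / step_time(n, p)

for n in (100, 1000, 10000):
    print(f"N = {n:5d}: modelled speedup on 24 ranks = {speedup(n, 24):.2f}")
```

Under these assumed coefficients the model shows the same qualitative picture as the experiments: communication dominates on a 100 × 100 grid (speedup below 1), while large grids approach the 24-rank limit.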


2016 ◽  
Vol 6 (1) ◽  
pp. 79-90
Author(s):  
Łukasz Syrocki ◽  
Grzegorz Pestka

Abstract: A ready-to-use set of functions for solving the generalized eigenvalue problem for symmetric matrices, in order to efficiently calculate eigenvalues and eigenvectors using NVIDIA's Compute Unified Device Architecture (CUDA) technology, is provided. An integral part of CUDA is a high-level programming environment that enables tracking of code executed both on the Central Processing Unit and on the Graphics Processing Unit. The presented matrix structures allow analysis of the advantages of using graphics processors in such calculations.
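A minimal sketch of the reduction such a package typically parallelises, assuming the standard Cholesky approach for A x = λ B x with B symmetric positive definite; the 2×2 matrices below are illustrative, not taken from the article.

```python
import math

# Sketch of the reduction A x = lambda B x  ->  C y = lambda y, where
# B = L L^T (Cholesky) and C = L^{-1} A L^{-T}. A CUDA solver performs the
# same steps on large matrices; here B is diagonal so L is trivial.

A = [[2.0, 1.0],
     [1.0, 3.0]]
B = [[1.0, 0.0],
     [0.0, 2.0]]

# Cholesky factor of the diagonal B: L = diag(sqrt(B[0][0]), sqrt(B[1][1]))
l0, l1 = math.sqrt(B[0][0]), math.sqrt(B[1][1])

# For diagonal L: C[i][j] = A[i][j] / (l_i * l_j)
C = [[A[0][0] / (l0 * l0), A[0][1] / (l0 * l1)],
     [A[1][0] / (l1 * l0), A[1][1] / (l1 * l1)]]

# Closed-form eigenvalues of the symmetric 2x2 matrix C
tr = C[0][0] + C[1][1]
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
eigvals = sorted([(tr - disc) / 2.0, (tr + disc) / 2.0])
print([round(v, 6) for v in eigvals])  # -> [1.0, 2.5]
```

The eigenvalues of C are exactly the generalized eigenvalues of the pair (A, B); one can check that det(A - λB) vanishes at λ = 1 and λ = 2.5.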


2014 ◽  
Vol 6 (2) ◽  
pp. 129-133
Author(s):  
Evaldas Borcovas ◽  
Gintautas Daunys

Image processing, computer vision and other complicated optical-information-processing algorithms require large resources, and it is often desired to execute them in real time. It is hard to fulfil such requirements with a single CPU. NVidia's CUDA technology enables the programmer to use the GPU resources of the computer. The current research was made with an Intel Pentium Dual-Core T4500 2.3 GHz processor with 4 GB DDR3 RAM (CPU I) and an NVidia GeForce GT320M CUDA-compatible graphics card (GPU I), and with an Intel Core i5-2500K 3.3 GHz processor with 4 GB DDR3 RAM (CPU II) and an NVidia GeForce GTX 560 CUDA-compatible graphics card (GPU II). The OpenCV 2.1 and CUDA-compatible OpenCV 2.4.0 libraries were used for the testing. The main tests were made with the standard MatchTemplate function from the OpenCV libraries. The algorithm uses a main image and a template; the influence of both factors was tested. The main image and the template were resized, and the algorithm's computing time and performance in Gtpix/s were measured. According to the information obtained from the research, GPU computing with the hardware mentioned above is up to 24 times faster when processing a large amount of information. When the images are small, the performance of the CPU and GPU does not differ significantly. The choice of template size influences computation on the CPU. The difference in computing time between the two GPUs is explained by the number of cores they have: in our study the faster GPU had 16 times more cores, and its computations ran correspondingly faster.
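The kernel behind the MatchTemplate benchmark can be sketched in a few lines. This is a hedged pure-Python illustration of the squared-difference variant, not OpenCV's implementation; the image and template values are invented for the example.

```python
# Pure-Python sketch of a template-matching kernel in the spirit of OpenCV's
# matchTemplate (SQDIFF variant): slide the template over the image and
# record the sum of squared differences at each offset. On a GPU each offset
# would be evaluated by an independent thread; here we loop serially.

def match_template_sqdiff(image, template):
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    scores = {}
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            ssd = sum((image[y + j][x + i] - template[j][i]) ** 2
                      for j in range(th) for i in range(tw))
            scores[(y, x)] = ssd
    return min(scores, key=scores.get)  # (row, col) of the best match

image = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 6, 0],
         [0, 0, 0, 0]]
template = [[9, 8],
            [7, 6]]
print(match_template_sqdiff(image, template))  # -> (1, 1)
```

Every offset is independent of the others, which is exactly why this workload maps so well onto thousands of GPU cores.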


Author(s):  
Hala Khankhour ◽  
Otman Abdoun ◽  
Jâafar Abouchabaka

<span>This article presents a new approach to integrating parallelism into the genetic algorithm (GA) to solve the routing problem in a large ad hoc network, where the goal is to find the shortest routing path. First, we fix the source and destination and use variable-length chromosomes (routes) with their genes (nodes). In this work we answer the following question: which is the better way to find the shortest path, the sequential or the parallel method? All modern systems support simultaneous processes and threads: processes are instances of programs that generally run independently (for example, when you start a program, the operating system spawns a new process that runs in parallel with other programs), and within these processes threads can execute code simultaneously. We can therefore make the most of the available central processing unit (CPU) cores. The obtained results show that our algorithm yields solutions of much better quality. We then propose an example network of 40 nodes to study the difference between the sequential and parallel methods, and afterwards increase the number of sensors to 100 nodes to solve the shortest-path problem in a large ad hoc network.</span>
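A minimal sketch of the thread-based parallelism described above, applied to the naturally parallel part of a GA: fitness evaluation of variable-length routes. The graph, weights and population below are invented for illustration; the authors' actual GA operators (selection, crossover, mutation) are not reproduced.

```python
from concurrent.futures import ThreadPoolExecutor

# Hedged sketch (not the authors' code): evaluate the fitness of a GA
# population of variable-length routes in parallel. Note that CPython
# threads share one interpreter (GIL); for heavy fitness functions a
# process pool is usual, but a thread pool keeps this example portable.

WEIGHTS = {('S', 'a'): 4, ('a', 'b'): 1, ('b', 'D'): 3,
           ('S', 'b'): 7, ('a', 'D'): 9}

def route_length(route):
    """Total weight of a route; infinite if an edge is missing."""
    total = 0
    for u, v in zip(route, route[1:]):
        w = WEIGHTS.get((u, v)) or WEIGHTS.get((v, u))
        if w is None:
            return float('inf')
        total += w
    return total

population = [['S', 'a', 'b', 'D'],   # 4 + 1 + 3 = 8
              ['S', 'b', 'D'],        # 7 + 3 = 10
              ['S', 'a', 'D'],        # 4 + 9 = 13
              ['S', 'b', 'a', 'D']]   # 7 + 1 + 9 = 17

with ThreadPoolExecutor(max_workers=4) as pool:
    lengths = list(pool.map(route_length, population))

best = population[lengths.index(min(lengths))]
print(best, min(lengths))  # -> ['S', 'a', 'b', 'D'] 8
```

Because each chromosome's fitness is independent, this evaluation step scales with the number of available cores, which is the speedup the article exploits.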


2013 ◽  
Vol 1 ◽  
pp. 151-163 ◽  
Author(s):  
Nikolay A. Simakov ◽  
Maria G. Kurnikova

Abstract: Poisson and Poisson-Boltzmann equations (PE and PBE) are widely used in molecular modeling to estimate the electrostatic contribution to the free energy of a system. In such applications, the PE often needs to be solved multiple times for a large number of system configurations, which can rapidly become a highly demanding computational task. To accelerate such calculations we implemented the graphics processing unit (GPU) PE solver described in this work. The GPU solver's performance is compared to that of our central processing unit (CPU) implementation. During the performance analysis the following three characteristics were studied: (1) precision associated with discretization of the modeled system on the grid, (2) numeric precision associated with the floating-point representation of real numbers (via comparison of calculations in single precision (SP) and double precision (DP)), and (3) execution time. Two types of example calculations were carried out to evaluate the solver's performance: (1) the solvation energy of a single ion and of a small protein (lysozyme), and (2) the potential of a single ion in a large ion channel (α-hemolysin). In addition, the influence of various boundary condition (BC) choices was analyzed to determine the most appropriate BC for systems that include a membrane, typically represented by a slab with a low dielectric constant. The implemented GPU PE solver is overall about 7 times faster than the CPU-based version using all four cores. Therefore, a single computer equipped with multiple GPUs can offer computational power comparable to that of a small cluster. Our calculations showed that the DP versions of the CPU and GPU solvers provide nearly identical results. The SP versions behave very similarly: in the grid-scale range of 1-4 grids/Å, the difference between the SP and DP versions is smaller than the difference stemming from the system discretization.
We found that for the membrane protein, using a focusing technique with periodic boundary conditions on the rough grid provides significantly better results than using a focusing technique with the electric potential set to zero at the boundaries.
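The grid discretization at the heart of such a solver can be sketched with a 1D Jacobi relaxation. This is an illustrative toy, not the authors' solver: the dimensionality, boundary conditions and right-hand side are chosen so that the exact solution is known in closed form.

```python
# Minimal sketch of the finite-difference Poisson iteration that a GPU PE
# solver parallelises: Jacobi relaxation of u'' = f on [0, 1] with
# u(0) = u(1) = 0. With f = -2 the exact solution is u(x) = x(1 - x).
# On a GPU, every interior grid point updates in its own thread per sweep.

def jacobi_poisson(f, n, sweeps):
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)                 # includes both Dirichlet boundaries
    for _ in range(sweeps):
        u_new = u[:]
        for i in range(1, n + 1):
            u_new[i] = 0.5 * (u[i - 1] + u[i + 1] - h * h * f(i * h))
        u = u_new
    return u

n = 15
u = jacobi_poisson(lambda x: -2.0, n, sweeps=2000)
h = 1.0 / (n + 1)
err = max(abs(u[i] - (i * h) * (1 - i * h)) for i in range(n + 2))
print(f"max error after 2000 Jacobi sweeps: {err:.2e}")
```

Production PE solvers use much better-converging iterations and 3D grids, but the per-point independence of each sweep, which the GPU exploits, is the same.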


2015 ◽  
Vol 2015 ◽  
pp. 1-13 ◽  
Author(s):  
Marwan Abdellah ◽  
Ayman Eldeib ◽  
Amr Sharawi

Fourier volume rendering (FVR) is a significant visualization technique that has been used widely in digital radiography. As a result of its O(N² log N) time complexity, it provides a faster alternative to spatial-domain volume rendering algorithms, which are O(N³) computationally complex. Relying on the Fourier projection-slice theorem, this technique operates on the spectral representation of a 3D volume instead of processing its spatial representation to generate attenuation-only projections that look like X-ray radiographs. Due to the rapid evolution of its underlying architecture, the graphics processing unit (GPU) became an attractive, competent platform that can deliver giant computational raw power compared to the central processing unit (CPU) on a per-dollar basis. The introduction of the compute unified device architecture (CUDA) technology enables embarrassingly parallel algorithms to run efficiently on CUDA-capable GPU architectures. In this work, a high-performance GPU-accelerated implementation of the FVR pipeline on CUDA-enabled GPUs is presented. The proposed implementation can achieve a speed-up of 117x compared to a single-threaded hybrid implementation that uses the CPU and GPU together, by taking advantage of executing the rendering pipeline entirely on recent GPU architectures.
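The projection-slice theorem that FVR rests on can be checked numerically on a toy image: the 1D DFT of an image's projection along y equals the ky = 0 slice of its 2D DFT. The sketch below uses explicit DFT loops for clarity (no FFT library); the image values are illustrative.

```python
import cmath

# Hedged demonstration of the Fourier projection-slice theorem on a tiny
# 4x4 "image", with explicit DFT sums rather than an FFT.

def dft1(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def slice_ky0(img):
    """The ky = 0 row of the full 2D DFT, computed from the raw 2D sum."""
    n = len(img)
    return [sum(img[y][x] * cmath.exp(-2j * cmath.pi * kx * x / n)
                for y in range(n) for x in range(n))
            for kx in range(n)]

img = [[1, 2, 0, 1],
       [0, 3, 1, 0],
       [2, 0, 0, 1],
       [1, 1, 2, 0]]

projection = [sum(img[y][x] for y in range(4)) for x in range(4)]  # along y
lhs = dft1(projection)   # 1D DFT of the projection
rhs = slice_ky0(img)     # central (ky = 0) slice of the 2D DFT
diff = max(abs(a - b) for a, b in zip(lhs, rhs))
print(f"max |difference| between slice and projected DFT: {diff:.1e}")
```

FVR scales this identity to 3D: one 3D FFT of the volume up front, then each projection costs only a 2D slice extraction plus an inverse 2D FFT, giving the O(N² log N) per-view complexity quoted above.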


Author(s):  
S. B. Aliev ◽  
Ye.U. Omarbekov
This paper analyses the experience of developing uranium deposit mines under conditions of high-pressure groundwater, the proposed "pumping wells" technology, and an upgraded technological scheme for the unit that receives and distributes the solution. The results of an experimental study of the use of "pumping wells" in mining uranium deposits by in-situ leaching at the "Karatau" mine are presented. It is shown that using the proposed technology and circuits under high groundwater pressure reduces the cost of procuring cables, significantly reduces the cost of acquiring submersible pumps, and ultimately yields savings. In practice, one processing unit is equipped with one unit for receiving and distributing the solution; therefore, a leaching solution with the same acidity is supplied to all injection wells. Avoiding such cases requires selective supply of acid at different concentrations, with different pH values. The scheme of the unit for receiving and distributing the solution was modernized by connecting two bypass lines, where one bypass line is designed to convert injection wells into pumping wells, and the second to convert pumping wells back into injection wells. With the two bypass lines connected, it becomes possible to supply a leach solution with a higher acid concentration selectively to any injection well. As a result, acid consumption will decrease due to its selective supply, and pH values across the wells will be balanced.
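The role of per-well acidity can be illustrated with a simple pH estimate. The acid concentrations below are assumed for illustration, not values from the paper, and the single-proton dissociation model for sulfuric acid is a common simplification at leaching concentrations.

```python
import math

# Back-of-envelope illustration (assumed values) of why per-well acid dosing
# matters: pH estimated from sulfuric acid molarity assuming complete first
# dissociation (H2SO4 -> H+ + HSO4-), a standard simplification.

M_H2SO4 = 98.08  # molar mass of sulfuric acid, g/mol

def ph_from_acid(grams_per_litre):
    molarity = grams_per_litre / M_H2SO4
    return -math.log10(molarity)  # one proton per molecule assumed

for g_l in (5.0, 10.0, 20.0):     # illustrative leach-solution acidities
    print(f"{g_l:5.1f} g/L H2SO4 -> pH ~ {ph_from_acid(g_l):.2f}")
```

Doubling the acid dose shifts the pH by roughly 0.3 units in this model, so supplying one common solution to all injection wells inevitably over- or under-acidifies some of them; the bypass lines let each well receive its own concentration.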


2020 ◽  
Author(s):  
Roudati jannah

Computer hardware is the part of a computer system that can be physically touched and seen, and that acts to execute instructions from software. Computer hardware is also simply called "hardware". Hardware contributes to the performance of a computer system as a whole. In principle, a computer system always has input hardware (input device system), processing hardware (central processing unit), output hardware (output device system), optional additional devices (peripherals), and data storage (storage device system / external memory).


2020 ◽  
Author(s):  
Ika Milia wahyunu Siregar

IT is developing very rapidly worldwide, from software to hardware. Technology now dominates most of the surface of the earth. Because technology develops so quickly, we as users can fall behind on information about new technology if we do not stay up to date, which can make us easily tempted and deceived by various technology advertisements without considering their downsides. As computer users, we should know about the components of a computer. A computer is a set of electronic machines consisting of millions of components that work together, forming a neat and precise working system. This system is then used to carry out work automatically, based on the instructions (program) given to it. The term computer hardware refers to objects that can physically be held, moved and seen. The Central Processing Unit (CPU) is a type of hardware that serves as the place where data is processed; it can also be called the brain of all processing activities, such as calculation, sorting, searching, writing, reading and so on.

