Evaluation of the Performance of Algorithms for Synthesizing Radar Images Using CUDA Technology

Doklady BGUIR ◽  
2021 ◽  
Vol 19 (6) ◽  
pp. 92-96
Author(s):  
S. V. Kozlov

The features of the implementation of an algorithm for synthesizing detailed radar images in an aperture synthesis radar using the built-in functions of the CUDA library are presented. The computational complexity is estimated from the standpoint of organizing parallel computations on Nvidia GPUs. An estimate of the real performance of radar image synthesis is given, taking into account the volume and placement features of the primary radar information.

2013 ◽  
Vol 2013 ◽  
pp. 1-15 ◽  
Author(s):  
Carlos Couder-Castañeda ◽  
Carlos Ortiz-Alemán ◽  
Mauricio Gabriel Orozco-del-Castillo ◽  
Mauricio Nava-Flores

An implementation with CUDA technology on a single and on several graphics processing units (GPUs) is presented for the calculation of the forward modeling of gravitational fields from a three-dimensional volumetric ensemble composed of unit prisms of constant density. We compared the performance results obtained with the GPUs against a previous version coded in OpenMP with MPI, and we analyzed the results on both platforms. Today, the use of GPUs represents a breakthrough in parallel computing, which has led to the development of applications in various fields. Nevertheless, in some applications the decomposition of the tasks is not trivial, as can be appreciated in this paper. Unlike a trivial decomposition of the domain, we proposed to decompose the problem by sets of prisms and use different memory spaces per CUDA processing core, avoiding the performance decay that would result from the constant kernel-function calls needed in a parallelization by observation points. The design and implementation created are the main contributions of this work, because the parallelization scheme implemented is not trivial. The performance results obtained are comparable to those of a small processing cluster.
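The decomposition by sets of prisms described above can be sketched in a minimal form: each worker accumulates the field contribution of its own prism subset at all observation points, and the partial fields are then summed. The sketch below is illustrative only, using a point-mass approximation of each constant-density prism rather than the exact prism formula, and all geometry, densities, and the four-way split are hypothetical stand-ins for the paper's per-GPU partitioning.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_prism_set(prism_centers, prism_masses, obs_points):
    """Vertical gravity at obs_points from one set of prisms
    (point-mass approximation of each constant-density prism)."""
    gz = np.zeros(len(obs_points))
    for c, m in zip(prism_centers, prism_masses):
        d = c - obs_points               # vectors from observers to prism
        r = np.linalg.norm(d, axis=1)    # observer-prism distances
        gz += G * m * d[:, 2] / r**3     # z-component of point-mass field
    return gz

# hypothetical model: 1000 prisms below the surface, 50 surface observers
rng = np.random.default_rng(0)
centers = rng.uniform(-500.0, 500.0, (1000, 3))
centers[:, 2] -= 1000.0                       # push sources to depth
masses = np.full(1000, 2670.0 * 10.0**3)      # density * prism volume
obs = np.column_stack([rng.uniform(-500.0, 500.0, (50, 2)), np.zeros(50)])

# decompose by sets of prisms ("one set per GPU" in spirit), then reduce
sets = np.array_split(np.arange(1000), 4)
partial = [gz_prism_set(centers[s], masses[s], obs) for s in sets]
total = np.sum(partial, axis=0)

# the prism-set decomposition reproduces the single-pass field
assert np.allclose(total, gz_prism_set(centers, masses, obs))
```

Because gravity is a linear superposition over sources, the partial fields from disjoint prism sets sum exactly to the full field, which is what makes this decomposition embarrassingly parallel.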


2015 ◽  
Vol 04 (04) ◽  
Author(s):  
Katerina Sheardova ◽  
Jan Laczó ◽  
Martin Vyhnalek ◽  
Ivana Mokrisova ◽  
Petr Telensky ◽  
Ross Andel

Author(s):  
Ana Beatriz Albuquerque Bento ◽  
Fernando Da Silva Cardoso

Education is undoubtedly a factor that contributes decisively to human development. In this sense, the present study seeks to evaluate, based on Freirean assumptions, the contemporary scenario of education in Brazil and its reflections in society. Through a historical and structural analysis, the problems that stand as impasses to a contextualized, plural, and accessible education are called into question, as we consider new paths, based on the epistemology of Paulo Freire, toward the real performance of students in human rights and citizenship.


2020 ◽  
Vol 37 ◽  
pp. 101346
Author(s):  
Wei Zhou ◽  
Ruitao Gu ◽  
Shuai Lu
Keyword(s):  
The Real ◽  

2014 ◽  
Vol 2014 ◽  
pp. 1-7
Author(s):  
Radim Briš ◽  
Simona Domesová

Reliability engineering is a relatively new scientific discipline that has developed in close connection with computers. The recent rapid development of computer technology requires corresponding innovation in source codes and appropriate software. A new parallel computing technology based on HPC (high-performance computing) for availability calculation is demonstrated in this paper. The technology is particularly effective in the context of simulation methods; nevertheless, analytical methods are taken into account as well. In general, basic algorithms for reliability calculations must be appropriately modified and improved to achieve better computational efficiency. Parallel processing is executed in two ways: firstly by the use of the MATLAB function parfor, and secondly by the use of CUDA technology. The computational efficiency was significantly improved, as clearly demonstrated in numerical experiments performed on selected testing examples as well as on an industrial example. Scalability graphs are used to demonstrate the reduction in computation time achieved by parallel computing.
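The simulation side of such an availability calculation parallelizes naturally, because independent Monte Carlo trials can be split into chunks and dispatched to workers. The sketch below is not the paper's code: it estimates the availability of a single repairable component with exponential failure and repair times, with hypothetical rates, and a plain loop standing in where each chunk would run on its own parfor worker or CUDA thread block.

```python
import numpy as np

LAM, MU = 1.0 / 1000.0, 1.0 / 10.0   # hypothetical failure/repair rates (1/h)
HORIZON = 10_000.0                    # simulated mission time (h)

def mean_uptime_fraction(seed, n_trials):
    """Average fraction of the horizon the component spends in the up state."""
    rng = np.random.default_rng(seed)
    up_total = 0.0
    for _ in range(n_trials):
        t, up = 0.0, 0.0
        while t < HORIZON:
            ttf = rng.exponential(1.0 / LAM)    # draw time to failure
            up += min(ttf, HORIZON - t)         # credit uptime within horizon
            t += ttf
            if t >= HORIZON:
                break
            t += rng.exponential(1.0 / MU)      # draw repair time
        up_total += up / HORIZON
    return up_total / n_trials

# split 1000 trials into 4 independent chunks; in the paper's setting each
# chunk would be mapped to a parfor worker or a CUDA block instead
parts = [mean_uptime_fraction(seed, 250) for seed in range(4)]
a_mc = float(np.mean(parts))
a_exact = MU / (LAM + MU)   # analytical steady-state availability for comparison
print(a_mc, a_exact)
```

Giving each chunk its own seeded random stream keeps the chunks statistically independent, which is exactly the property that lets the simulation scale across workers without communication.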


2021 ◽  
Vol 1203 (2) ◽  
pp. 022132
Author(s):  
Rostislav Doubek ◽  
Dita Hořínková ◽  
Martin Štěrba ◽  
Radka Kantová

Abstract The productivity of work performed at construction sites is primarily dependent on the effective deployment and use of construction machinery. Nevertheless, manufacturers do not state the actual performance of their machinery because it is difficult to determine due to its dependence on the specific conditions present at each construction site. One of the most important machines used in the construction of buildings is the tower crane, which provides secondary transport of material onsite. In order to evaluate the effectiveness of the use of such machines using a deterministic or stochastic approach, a relatively extensive and exact set of data describing the activities of a given tower crane needs to be prepared. These data describe the real requirements of ongoing construction sub-processes with regard to the utilisation of tower cranes. This contribution concerns the analysis of key construction sub-processes during the building of monolithic reinforced concrete structures in connection with secondary transport at the construction site; in particular, it describes the preparation and processing of this data for the evaluation of real time requirements placed on tower cranes.


Author(s):  
Fabrizio Coricelli ◽  
Marco Frigerio

We find that European SMEs significantly increased their net trade credit to sales ratio during the Great Recession. For the aggregate of SMEs, trade credit did not provide any buffer to the contraction in bank loans. In fact, through increased net trade credit, SMEs suffered a squeeze in their liquidity and this phenomenon reflects the weak bargaining power of SMEs in their trade credit relationship with larger firms. Therefore, increased net trade credit by SMEs does not reflect an efficient reallocation of credit, and it calls for policy actions. These policy actions are highly relevant, given that the liquidity squeeze had significant adverse effects on the real performance of SMEs, contributing to the recession and to the subsequent timid recovery of European economies. We explore various policies that could be implemented to relieve SMEs from the liquidity squeeze induced by the increase in their receivables.


2013 ◽  
Vol 380-384 ◽  
pp. 1571-1575
Author(s):  
Hong Chen ◽  
Hu Xing Zhou ◽  
Juan Meng

To solve the problem that a central guidance system takes too long to calculate the shortest routes between all node pairs of a network, which cannot meet the real-time demand of central guidance, this paper presents a parallel route optimization method for central guidance based on parallel computing that accounts for both route optimization time and travelers' preferences. The method comprises three parts: network data storage based on an array, multi-level network decomposition that takes travelers' preferences into account, and parallel deque-based shortest-route computation with message passing. Using the actual traffic network data of Guangzhou city, the suggested method was verified on three parallel computing platforms: an ordinary PC cluster, a Lenovo server cluster, and an HP workstation cluster. The results show that the three clusters finish the optimization of 21.4 million routes between 5631 nodes of the Guangzhou city traffic network in 215, 189, and 177 seconds, respectively, which fully meets the real-time demand of central guidance.
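The serial core of such deque-based shortest-route computation is a label-correcting search in which the deque decides the scan order. The sketch below shows one common variant (d'Esopo-Pape style: re-scanned nodes go to the front, new nodes to the back); it is an assumption that the paper uses this exact rule, and the toy graph is made up, not the Guangzhou network data.

```python
from collections import deque

def shortest_paths(adj, source):
    """Deque-based label-correcting single-source shortest paths.
    adj: {u: [(v, w), ...]} with nonnegative edge weights."""
    dist = {u: float("inf") for u in adj}
    dist[source] = 0.0
    dq = deque([source])
    in_dq = {u: False for u in adj}
    in_dq[source] = True
    scanned = set()
    while dq:
        u = dq.popleft()
        in_dq[u] = False
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                if not in_dq[v]:
                    # Pape's rule: previously scanned nodes rejoin at the
                    # front of the deque, unseen nodes join at the back
                    if v in scanned:
                        dq.appendleft(v)
                    else:
                        dq.append(v)
                    in_dq[v] = True
        scanned.add(u)
    return dist

# toy network, not the Guangzhou data
adj = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.0)],
    "D": [],
}
print(shortest_paths(adj, "A"))  # {'A': 0.0, 'B': 2.0, 'C': 3.0, 'D': 4.0}
```

In the parallel setting of the paper, each worker would run such a search on its own subnetwork or source set, exchanging boundary labels by message passing.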


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 817
Author(s):  
Weibo Huo ◽  
Qiping Zhang ◽  
Yin Zhang ◽  
Yongchao Zhang ◽  
Yulin Huang ◽  
...  

The super-resolution method has been widely used for improving azimuth resolution in radar forward-looking imaging. Typically, this is achieved by solving a non-differentiable L1 regularization problem. The split Bregman algorithm (SBA) is a great tool for solving this non-differentiable problem. However, its real-time imaging ability is limited by matrix inversion and iterations. Although previous studies have used the special structure of the coefficient matrix to reduce the computational complexity of each iteration, real-time performance is still limited by the need for hundreds of iterations. In this paper, a superfast SBA (SFSBA) is proposed to overcome this shortcoming. Firstly, the super-resolution problem is transformed into an L1 regularization problem in the framework of regularization. Then, the proposed SFSBA is used to solve the non-differentiable L1 regularization problem. Different from the traditional SBA, the proposed SFSBA utilizes the low displacement rank of the Toeplitz matrix, along with the Gohberg-Semencul (GS) representation, to realize fast inversion of the coefficient matrix, reducing the computational complexity of each iteration from O(N^3) to O(N^2). It also uses a second-order vector extrapolation strategy to reduce the number of iterations, increasing the convergence speed by about 8 times. Finally, simulation and real data processing results demonstrate that the proposed SFSBA can effectively improve the azimuth resolution of radar forward-looking imaging, with performance only slightly lower than that of the traditional SBA. A hardware test shows that the computational efficiency of the proposed SFSBA is much higher than that of other traditional super-resolution methods, which would meet real-time requirements in practice.
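The baseline the paper accelerates, the classic split Bregman iteration for an L1-regularized least-squares problem, alternates a linear solve, a soft-thresholding step, and a Bregman update. The sketch below is a generic illustration of that iteration on a toy sparse-recovery problem; the matrix, regularization weights, and iteration count are made-up values, not the radar steering matrix, and it does not include the paper's Toeplitz/GS fast inversion or vector extrapolation.

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding: the closed-form proximal step for the L1 term."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def split_bregman_l1(A, f, lam=0.1, mu=1.0, iters=200):
    """Solve min_x 0.5*||A x - f||^2 + lam*||x||_1 by split Bregman."""
    n = A.shape[1]
    x = np.zeros(n); d = np.zeros(n); b = np.zeros(n)
    M = A.T @ A + mu * np.eye(n)   # fixed system matrix for every iteration
    Atf = A.T @ f
    for _ in range(iters):
        x = np.linalg.solve(M, Atf + mu * (d - b))  # quadratic subproblem
        d = shrink(x + b, lam / mu)                 # L1 subproblem
        b = b + x - d                               # Bregman variable update
    return x

# toy sparse-recovery problem (not radar data)
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100); x_true[[5, 40, 77]] = [2.0, -1.5, 1.0]
f = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = split_bregman_l1(A, f)
print(np.linalg.norm(x_hat - x_true))
```

The dense solve of `M` costs O(N^3) here, which is exactly the step the paper replaces with a Gohberg-Semencul-based O(N^2) inversion by exploiting the Toeplitz structure of the coefficient matrix.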

