GPU ACCELERATED COMPUTING IN RADAR INFORMATION PROCESSING

Author(s):  
Linh

Position synchronization on maps is a critical problem in displaying radar information. Previously, this process was carried out sequentially on the central processing unit (CPU). This often causes bottlenecks and long processing times because of the CPU's limited architecture. To mitigate this problem, we propose a parallel processing algorithm that exploits the multi-core architecture of the GPU to quickly reproject coordinates in radar information processing, improving information display performance. Practical tests were run on different data sets to demonstrate the performance of this method, and the results show speedups of between 10 and 100 times in each test.
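The reprojection step described here is a natural data-parallel kernel: each radar plot's coordinates can be transformed independently of all others. As a minimal sketch (assuming a WGS84 lat/lon to Web Mercator reprojection, which the abstract does not specify), the vectorized NumPy version below maps one-to-one onto a GPU kernel, e.g. by swapping `numpy` for `cupy`:

```python
import numpy as np

def reproject_web_mercator(lat_deg, lon_deg):
    """Vectorized lat/lon -> Web Mercator reprojection (illustrative).

    Operating on whole arrays lets the same formula run element-wise
    in parallel -- e.g. one GPU thread per coordinate via CuPy/CUDA --
    instead of looping on the CPU.
    """
    R = 6378137.0  # WGS84 semi-major axis, metres
    lat = np.radians(np.asarray(lat_deg, dtype=np.float64))
    lon = np.radians(np.asarray(lon_deg, dtype=np.float64))
    x = R * lon
    y = R * np.log(np.tan(np.pi / 4 + lat / 2))
    return x, y

# Example: reproject a batch of radar plot positions at once.
lats = np.array([0.0, 45.0, 60.0])
lons = np.array([0.0, 10.0, 20.0])
x, y = reproject_web_mercator(lats, lons)
```

Because each output element depends only on its own input pair, a GPU can assign one thread per coordinate, which is the kind of parallelism behind the reported 10 to 100 times speedups over a sequential CPU loop.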

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Hossein Ahmadvand ◽  
Fouzhan Foroutan ◽  
Mahmood Fathy

Data variety is one of the most important features of Big Data. It is the result of aggregating data from multiple sources and of the uneven distribution of data, and it causes high variation in the consumption of processing resources such as CPU. This issue has been overlooked in previous works. To overcome it, in the present work we use Dynamic Voltage and Frequency Scaling (DVFS) to reduce the energy consumption of computation. To this end, we consider two types of deadlines as our constraints. Before applying the DVFS technique to compute nodes, we estimate the processing time and the frequency needed to meet the deadline. In the evaluation phase, we used a set of data sets and applications. The experimental results show that our proposed approach surpasses the other scenarios in processing real datasets. Based on the experimental results in this paper, DV-DVFS can achieve up to a 15% improvement in energy consumption.
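The core of the approach, estimating processing time and then selecting a frequency that still meets the deadline, can be sketched as follows. The cycle-count model and the frequency list are illustrative assumptions, not the paper's actual estimator:

```python
def pick_frequency(est_cycles, deadline_s, freqs_hz):
    """Choose the lowest DVFS frequency that still meets the deadline.

    est_cycles : estimated CPU cycles for the job (e.g. from profiling)
    deadline_s : time budget in seconds
    freqs_hz   : available P-state frequencies (hypothetical values)

    Returns the chosen frequency, or None if even the fastest
    frequency cannot meet the deadline.
    """
    for f in sorted(freqs_hz):
        if est_cycles / f <= deadline_s:
            return f  # slowest feasible frequency -> least energy
    return None

# 2e9 cycles with a 1.5 s budget: 1.2 GHz misses, 1.6 GHz makes it.
f = pick_frequency(2e9, 1.5, [1.2e9, 1.6e9, 2.4e9])
```

Running at the slowest feasible frequency is what saves energy: dynamic power grows superlinearly with frequency (and voltage), so any headroom before the deadline is converted into a lower operating point.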


2020 ◽  
Vol 30 (3) ◽  
pp. 99-111
Author(s):  
D. A. Palguyev ◽  
A. N. Shentyabin

In the processing of dynamically changing data, for example radar data (RD), the representation of the various data sets containing information about the routes and attributes of air objects plays a crucial part. In practical implementations of the computational process, it previously seemed natural for RD processing over data arrays to be carried out by element-wise search. However, representing data arrays as matrices and using matrix algebra allows the calculations in tertiary processing to be organized optimally. Forming matrices and working with them requires a significant computational resource, so the authors assume that a certain gain in calculation time can be achieved when the arrays hold a large amount of data, at least several thousand messages. The article shows the sequences of the most frequently repeated operations of tertiary network processing, such as searching for and replacing an array element. The simulation results show that the processing efficiency (the relative reduction of processing time and the saving of computing resources) achieved by using matrices, in comparison with element-wise search and replacement, grows in proportion to the number of messages received by the information processing device. The most significant gain is observed when processing several thousand messages (array elements). Thus, using matrices and the mathematical apparatus of matrix algebra to process arrays of dynamically changing data can reduce processing time and save computational resources. The proposed matrix method of organizing calculations can also find its place in the modeling of complex information systems.
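The element-wise versus matrix contrast can be illustrated with a small track table. The layout below (rows as messages, columns as track attributes) is a hypothetical example, not the authors' actual data structure; the point is that a single vectorized mask replaces a scalar search loop, and the advantage grows with the number of rows:

```python
import numpy as np

# Hypothetical track table: one row per air-object message,
# columns: [track_id, x, y, speed]
tracks = np.array([
    [101, 10.0, 20.0, 250.0],
    [102, 15.0, 25.0, 300.0],
    [103, 30.0, 40.0, 280.0],
])

# Element-wise search would scan rows one by one in a Python loop.
# The matrix formulation does the search and replacement in two
# whole-array operations:
mask = tracks[:, 0] == 102          # find the row(s) for track 102
tracks[mask, 1:3] = [16.0, 26.0]    # replace its coordinates in place
```

With several thousand messages, the masked form stays a handful of array operations while the element-wise scan costs one interpreted iteration per message, which matches the proportional gain reported in the abstract.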


1993 ◽  
Vol 25 (1) ◽  
pp. 176-202 ◽  
Author(s):  
Nicholas Bambos ◽  
Jean Walrand

In this paper we study the following general class of concurrent processing systems. There are several different classes of processors (servers) and many identical processors within each class. There is also a continuous random flow of jobs arriving at the system for processing. Each job needs to engage several processors from various classes concurrently in order to be processed. After acquiring the needed processors, the job begins to execute. Processing is non-preemptive, lasts for a random amount of time, and then all the processors are released simultaneously. Each job is specified by its arrival time, its processing time, and the list of processors that it needs to access simultaneously. The random flow (sequence) of jobs has a stationary and ergodic structure. There are several possible policies for scheduling the jobs on the processors for execution; it is up to the system designer to choose the scheduling policy to achieve certain objectives. We focus on the effect that the choice of scheduling policy has on the asymptotic behavior of the system at large times, and especially on its stability, under general stationary and ergodic input flows.
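The key constraint of the model, that a job must hold processors from several classes simultaneously and non-preemptively, amounts to an all-or-nothing acquisition step. The function below is an illustrative fragment of such a scheduler, not the paper's formalism:

```python
def try_start(free, need):
    """Attempt to start a job that must hold processors from several
    classes at once (non-preemptive): either every requested processor
    is acquired atomically, or none are and the job waits.

    free : dict mapping processor class -> number of idle processors
    need : dict mapping processor class -> processors this job requires
    """
    if all(free.get(c, 0) >= n for c, n in need.items()):
        for c, n in need.items():
            free[c] -= n            # acquire all classes together
        return True
    return False                    # insufficient capacity: job waits

free = {"A": 2, "B": 1}
ok1 = try_start(free, {"A": 1, "B": 1})  # acquires one A and one B
ok2 = try_start(free, {"A": 1, "B": 1})  # blocked: no B processor left
```

The second job is blocked even though a class-A processor is idle, which is exactly the coupling between classes that makes the stability of such systems depend on the scheduling policy.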


2021 ◽  
Author(s):  
Hongjie Zheng ◽  
Hanyu Chang ◽  
Yongqiang Yuan ◽  
Qingyun Wang ◽  
Yuhao Li ◽  
...  

Global navigation satellite systems (GNSS) have been playing an indispensable role in providing positioning, navigation and timing (PNT) services to global users. Over the past few years, GNSS have developed rapidly, with abundant networks, modernized constellations, and multi-frequency observations. To take full advantage of multi-constellation and multi-frequency GNSS, several new mathematical models have been developed, such as multi-frequency ambiguity resolution (AR) and uncombined data processing with raw observations. In addition, new GNSS products, including the uncalibrated phase delay (UPD), the observable signal bias (OSB), and the integer recovery clock (IRC), have been generated and provided by analysis centers to support advanced GNSS applications.

However, the increasing number of GNSS observations poses a great challenge to the fast generation of multi-constellation and multi-frequency products. In this study, we propose an efficient solution that realizes fast updating of multi-GNSS real-time products by making full use of advanced computing techniques. First, instead of the traditional vector operations, the "level-3 operations" (matrix by matrix) of the Basic Linear Algebra Subprograms (BLAS) are used as much as possible in the least-squares (LSQ) processing, which improves efficiency thanks to central processing unit (CPU) optimization and faster memory data transfer. Furthermore, most steps of multi-GNSS data processing are transformed from serial to parallel mode to take advantage of the multi-core CPU architecture and graphics processing unit (GPU) computing resources. Moreover, we choose the OpenBLAS library for matrix computation, as it performs well in parallel environments.

The proposed method is then validated on a 3.30 GHz AMD CPU with 6 cores. The results demonstrate that the proposed method can substantially improve the processing efficiency of multi-GNSS product generation. For the precise orbit determination (POD) solution with 150 ground stations and 128 satellites (GPS/BDS/Galileo/GLONASS/QZSS) in ionosphere-free (IF) mode, the processing time can be shortened from 50 to 10 minutes, which guarantees hourly updating of multi-GNSS ultra-rapid orbit products. The processing time of uncombined POD can also be reduced by about 80%. Meanwhile, multi-GNSS real-time clock products can easily be generated at a 5-second or even higher sampling rate. In addition, the processing efficiency of UPD and OSB product generation can be increased by 4-6 times.
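The level-3 BLAS argument can be seen in miniature with the normal-equations step of least squares: accumulating A^T A row by row uses many small vector operations, while a single matrix-matrix product dispatches to an optimized BLAS `dgemm`/`dsyrk`. The sketch below uses NumPy (which calls a BLAS such as OpenBLAS underneath) with hypothetical dimensions, not the paper's actual LSQ setup:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5000, 60))  # design matrix: observations x parameters

# Level-1/2 style: accumulate the normal matrix one observation
# (one outer product) at a time -- many small vector operations.
N_loop = np.zeros((60, 60))
for row in A:
    N_loop += np.outer(row, row)

# Level-3 style: one matrix-matrix product, dispatched to BLAS,
# which blocks the computation for cache reuse and uses all cores.
N_blas = A.T @ A
```

Both forms produce the same normal matrix; the level-3 form simply gives the BLAS implementation a large enough operation to optimize, which is where the CPU-level speedup in the abstract comes from.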


Author(s):  
E.V. Egorova ◽  
A.N. Rybakov ◽  
M.H. Aksyaitov

Studies of the phased introduction of neural network technologies into the practice of radar information processing, providing for a gradual increase in the role of neural network methods in processing systems, have shown that neural network technologies can improve the quality of radar information processing under the most difficult conditions: those that demand high computing power, where external conditions change very rapidly, and where traditional approaches to building processing systems cannot provide the required level of efficiency. The need to develop a theoretical basis for neural network processing of radar information was identified. The main features of radar information processing make relevant the research devoted to preventing the degradation of radar images in conditions with a large number of targets and a complex jamming environment through the rational use of neural network technology. Analysis of the phased introduction of neural network technologies into radar information processing systems, together with the exploratory use of neural network techniques for processing radar information, makes it possible to increase the efficiency of neural network methods across all processing tasks. An assessment of the required performance of computational tools highlights the main neural network paradigms whose use gives a tangible gain in the efficiency of radar information processing: the multilayer perceptron, Hopfield associative memory, and the self-organizing Kohonen network. These methods can be ranked according to their required performance; they are undemanding of computing power and can be implemented on existing or prospective computing facilities with software implementations of the neural network paradigms.
The analysis of possible directions for improving the quality of radar information processing does not claim to cover the entire multifaceted area of such studies. In this paper, only the most universal and widespread neural network paradigms are considered, and the main possible areas of their application are analyzed. Nevertheless, the proposed options show that using neural network technologies in critical tasks will improve the efficiency of radar information processing under complex, rapidly changing external conditions. Applying the principles of self-learning and the developed apparatus for synthesizing neural network methods will reduce the duration and complexity of theoretical research, which is a necessary and mandatory part of the traditional approach. In the course of further research, some of the proposed methods may be refined, and new methods may emerge that exploit the advantages of neural network technology more fully. Further research in these areas will give a powerful stimulus to the future creation of highly efficient methods for processing radar information that can be implemented on the available hardware base.
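Of the paradigms named above, the Hopfield associative memory is the easiest to sketch compactly. The toy below stores one bipolar pattern with a Hebbian rule and recalls it from a corrupted input; it is a textbook illustration of the paradigm, not the processing scheme proposed in the paper:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weights for a Hopfield associative memory.
    patterns: array of shape (p, n) with entries in {-1, +1}."""
    p, n = patterns.shape
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def recall(W, state, steps=10):
    """Synchronous updates until the state settles on a stored pattern."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Store one hypothetical 8-element pattern, then recall it
# from an input with one corrupted element.
stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]], dtype=float)
W = train_hopfield(stored)
noisy = stored[0].copy()
noisy[0] = -noisy[0]
restored = recall(W, noisy)
```

This error-correcting recall from a distorted input is why associative memory is attractive for radar tasks where returns are corrupted by jamming.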


Author(s):  
Suguru N. Kudoh

A neurorobot is a model system for biological information processing with vital components and an artificial peripheral system. As the central processing unit of the neurorobot, a dissociated culture system possesses a simple and functional network compared with a whole brain; thus, it is suitable for exploring the spatiotemporal dynamics of the electrical activity of a neuronal circuit. The behavior of the neurorobot is determined by the pattern of neuronal electrical activity evoked by current stimulation from the outer world. "Certain premise rules" should be embedded in the relationship between the spatiotemporal activity of neurons and the intended behavior. As strategies for embedding premise rules, two ideas are proposed. The first is "shaping," by which a neuronal circuit is trained to deliver a desired output. The shaping strategy presumes that meaningful behavior requires manipulation of the living neuronal network. The second strategy is "coordinating." A living neuronal circuit is regarded as the central processing unit of the neurorobot. Instinctive behavior is provided as premise control rules, which are embedded into the relationship between the living neuronal network and the robot. The direction of the self-tuning process of neurons is not always suitable for the desired behavior of the neurorobot, so the interface between neurons and robot should be designed to make the direction of the self-tuning process of the neuronal network correspond with the desired behavior of the robot. Details of these strategies and concrete designs of the interface between neurons and robot are introduced and discussed in this chapter.


Author(s):  
Cepi Ramdani ◽  
Indah Soesanti ◽  
Sunu Wibirama

The Fuzzy C-Means (FCM) algorithm is one of many clustering algorithms, and it offers good accuracy on problems related to segmentation. It is applied in almost every aspect of life and in many scientific disciplines. However, the algorithm has some shortcomings, one of which is its large processing time. This research mainly analyzes the effect of the segmentation parameters on processing time in sequential and parallel execution. The other goal is to reduce the processing time of the segmentation process using a parallel approach. Parallel processing was applied on an Nvidia GeForce GT540M GPU using the CUDA v8.0 framework. The experiments were conducted on natural RGB color images sized 256x256 and 512x512. The segmentation parameter values were set as follows: weight in the range 2-3, number of iterations 50-150, number of clusters 2-8, and error tolerance (epsilon) from 0.1 to 1e-06. The results are as follows: parallel processing is about 4.5 times faster than sequential processing, and the segmentations generated by the two types of processing are 100% similar. The influence of the segmentation parameter values on sequential and parallel processing times can be summarized as follows: the greater the weight parameter, the shorter the sequential processing time, while it has no effect on the parallel processing time. For the iteration and cluster parameters, larger values increase both sequential and parallel processing time. Meanwhile, the epsilon parameter has no effect, or an unpredictable tendency, on either processing time.
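For reference, the FCM iteration whose parameters (weight m, iteration count, cluster count c, epsilon) are varied above can be sketched as follows. This is a minimal vectorized version with a deterministic center initialization, not the authors' CUDA implementation; the per-sample distance and membership updates are exactly the kernels a GPU port parallelizes, one thread per pixel:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, eps=1e-6):
    """Minimal fuzzy c-means sketch.
    X: (n, d) samples. Returns (centers, membership matrix U)."""
    # Deterministic init: spread initial centers over the data.
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)].astype(float).copy()
    U = None
    for _ in range(iters):
        # Distance of every sample to every center: shape (n, c).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)            # avoid division by zero
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
        w = d ** (-2.0 / (m - 1.0))
        U_new = w / w.sum(axis=1, keepdims=True)
        if U is not None and np.abs(U_new - U).max() < eps:
            U = U_new                        # epsilon stopping rule
            break
        U = U_new
        # Center update: fuzzily weighted mean of the samples.
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return centers, U

# Two well-separated 1-D clusters as a toy input.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
centers, U = fcm(X)
```

Each row of the distance and membership arrays is computed independently of the others, which is why the image-sized versions of these updates parallelize well on CUDA.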

