Matrix application for multi-radar processing of radar data arrays

2020
Vol 30 (3)
pp. 99-111
Author(s):  
D. A. Palguyev ◽  
A. N. Shentyabin

In the processing of dynamically changing data, for example radar data (RD), a crucial role is played by the representation of the data sets that contain information about the tracks and attributes of air objects. In practical implementations of the computational process, it previously seemed natural to process RD arrays by elementwise search. However, representing data arrays as matrices and using the apparatus of matrix algebra allows the calculations of tertiary processing to be organized optimally. Forming matrices and working with them requires a significant computational resource, so the authors assume that a measurable gain in calculation time can be achieved only when the arrays are large, holding at least several thousand messages. The article shows the sequences of the most frequently repeated operations of tertiary network processing, such as searching for and replacing an array element. The simulation results show that the processing efficiency (the relative reduction in processing time and the saving of computing resources) obtained with matrices, compared with elementwise search and replacement, grows in proportion to the number of messages received by the information processing device. The most significant gain is observed when processing several thousand messages (array elements). Thus, the use of matrices and the mathematical apparatus of matrix algebra for processing arrays of dynamically changing data can reduce processing time and save computational resources. The proposed matrix method of organizing calculations can also find a place in the modeling of complex information systems.
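To make the contrast concrete, here is a minimal Python sketch (not the authors' implementation) of the two search-and-replace strategies; the track-table layout, field order, and sizes are invented for illustration:

```python
import numpy as np

# Hypothetical track table: one row per air-object message,
# columns = (track_id, x, y, velocity); layout and sizes are invented.
n_messages = 5000
tracks = np.random.rand(n_messages, 4)
tracks[:, 0] = np.arange(n_messages)          # track identifiers

def replace_elementwise(table, track_id, new_row):
    """Traditional approach: scan the array element by element."""
    for i in range(table.shape[0]):
        if table[i, 0] == track_id:
            table[i, :] = new_row
            return

def replace_matrix(table, track_id, new_row):
    """Matrix-style approach: one vectorized search over the whole array."""
    mask = table[:, 0] == track_id            # boolean match vector
    table[mask, :] = new_row

update = np.array([4321.0, 0.5, 0.5, 0.9])
replace_elementwise(tracks, 4321, update)
replace_matrix(tracks, 4321, update)
```

The boolean-mask variant delegates the scan to optimized array kernels, and its relative advantage over the Python-level loop grows with the number of messages, which is consistent with the gains the article reports for arrays of several thousand elements.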

2021
pp. 001112872110077
Author(s):  
Lin Liu ◽  
R.R. Dunlea ◽  
Besiki Luka Kutateladze

The literature on sentencing has devoted ample consideration to how prosecutors and judges incorporate priorities such as retribution and public safety into their decision making, typically using legal and extralegal characteristics as analytic proxies. In contrast, the role of case processing efficiency in determining punishment outcomes has garnered little attention. Using recent data from a large Florida jurisdiction, we examine the influence of case screening and disposition timeliness on sentence outcomes in felony cases. We find that lengthier case processing time is highly and positively associated with punitive outcomes at sentencing. The more time prosecutors spend on a case post-filing, the more likely defendants are to receive custodial sentences and longer sentences. Case screening time, although not affecting the imposition of custodial sentences, is also positively associated with sentence length. These findings are discussed through the lens of instrumental and expressive functions of punishment.


Electronics
2021
Vol 10 (5)
pp. 621
Author(s):  
Giuseppe Psaila ◽  
Paolo Fosci

Internet technology and mobile technology have enabled the production and diffusion of massive data sets concerning almost every aspect of day-to-day life. Remarkable examples are social media and apps for volunteered information production, as well as Open Data portals on which public administrations publish authoritative and (often) geo-referenced data sets. In this context, JSON has become the most popular standard for representing and exchanging possibly geo-referenced data sets over the Internet. Analysts wishing to manage, integrate and cross-analyze such data sets need a framework that allows them to access possibly remote storage systems for JSON data sets and to retrieve and query those data sets by means of a single query language (independent of the specific storage technology), exploiting possibly remote computational resources (such as cloud servers) while working comfortably on the PCs in their offices, largely unaware of the real location of the resources. In this paper, we present the current state of the J-CO Framework, a platform-independent and analyst-oriented software framework to manipulate and cross-analyze possibly geo-tagged JSON data sets. The paper presents the general approach behind the J-CO Framework, illustrating the query language by means of a simple yet non-trivial example of geographical cross-analysis. The paper also presents the novel features introduced by the re-engineered version of the execution engine and the most recent components, i.e., the storage service for large single JSON documents and the user interface that allows analysts to comfortably share data sets and computational resources with other analysts possibly working in different parts of the globe. Finally, the paper reports the results of an experimental campaign, which shows that the execution engine performs in a more than satisfactory way, proving that the framework can actually be used by analysts to process JSON data sets.
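As a flavor of what such a geographical cross-analysis does, here is a minimal plain-Python sketch; it does not reproduce the actual J-CO-QL syntax, and the data sets, field names, and 5 km threshold are invented:

```python
import json, math

# Two toy geo-referenced JSON data sets; in the J-CO Framework these would
# live in (possibly remote) JSON stores and be joined via the query language.
sensors = json.loads('[{"name": "s1", "lat": 45.70, "lon": 9.67}]')
events  = json.loads('[{"kind": "e1", "lat": 45.71, "lon": 9.66},'
                     ' {"kind": "e2", "lat": 46.50, "lon": 10.20}]')

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371.0 * 2 * math.asin(math.sqrt(a))

# Geographical cross-analysis: keep events within 5 km of any sensor.
nearby = [e for e in events for s in sensors
          if haversine_km(e["lat"], e["lon"], s["lat"], s["lon"]) < 5.0]
print(nearby)
```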


2021
Vol 8 (1)
Author(s):  
Hossein Ahmadvand ◽  
Fouzhan Foroutan ◽  
Mahmood Fathy

Data variety is one of the most important features of Big Data. It results from aggregating data from multiple sources and from the uneven distribution of data, and it causes high variation in the consumption of processing resources such as CPU time. This issue has been overlooked in previous works. To overcome the problem, in the present work we use Dynamic Voltage and Frequency Scaling (DVFS) to reduce the energy consumption of computation. To this end, we consider two types of deadlines as our constraint. Before applying the DVFS technique to the compute nodes, we estimate the processing time and the frequency needed to meet the deadline. In the evaluation phase, we use a set of data sets and applications. The experimental results show that our proposed approach surpasses the other scenarios in processing real data sets. Based on the experimental results in this paper, DV-DVFS can achieve up to a 15% improvement in energy consumption.
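A minimal sketch of the deadline-driven frequency selection idea follows; the frequency steps and the assumption that runtime scales inversely with frequency are illustrative, not the paper's model:

```python
# Hypothetical DVFS P-states of a compute node, in GHz.
AVAILABLE_FREQS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]

def pick_frequency(est_time_at_fmax_s, deadline_s, f_max=2.8):
    """Lowest frequency step that still meets the deadline,
    assuming runtime ~ 1/frequency (a simplification)."""
    if est_time_at_fmax_s > deadline_s:
        return f_max                          # no slack: run at full speed
    f_needed = f_max * est_time_at_fmax_s / deadline_s
    return min(f for f in AVAILABLE_FREQS_GHZ if f >= f_needed)

print(pick_frequency(est_time_at_fmax_s=120.0, deadline_s=200.0))  # -> 2.0
```

Picking the lowest step that still meets the deadline is what saves energy, since dynamic power grows superlinearly with voltage and frequency.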


Author(s):  
Danlei Xu ◽  
Lan Du ◽  
Hongwei Liu ◽  
Penghui Wang

A Bayesian classifier for sparsity-promoting feature selection is developed in this paper, where a set of nonlinear mappings of the original data is performed as a pre-processing step. The linear classification model with such mappings from the original input space to a nonlinear transformation space can not only construct a nonlinear classification boundary but also realize feature selection for the original data. A zero-mean Gaussian prior with Gamma precision and a finite approximation of the Beta process prior are used to promote sparsity in the utilization of features and nonlinear mappings, respectively. We derive the Variational Bayesian (VB) inference algorithm for the proposed linear classifier. Experimental results on a synthetic data set, a measured radar data set, a high-dimensional gene expression data set, and several benchmark data sets demonstrate the aggressive and robust feature selection capability of our method and its classification accuracy, which is comparable with that of other existing classifiers.
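The following sketch illustrates the overall pipeline shape only: RBF mappings as the nonlinear pre-processing step, with an L1-penalized logistic regression standing in for the paper's Gaussian-Gamma/Beta-process priors and VB inference; the data, kernel width, and regularization strength are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                 # toy data; only feature 0 matters
y = (X[:, 0] > 0).astype(int)

# Nonlinear pre-processing step: RBF mappings of the original features.
centers = X[rng.choice(len(X), 20, replace=False)]
Phi = np.exp(-0.1 * ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1))

# Stand-in for the sparsity-promoting priors + VB inference: an L1-penalized
# linear classifier on the mapped features, which likewise drives most of
# the mapping weights to zero.
clf = LogisticRegression(penalty="l1", C=0.5, solver="liblinear").fit(Phi, y)
print("active mappings:", np.flatnonzero(clf.coef_ != 0))
```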


2021
Author(s):  
Hongjie Zheng ◽  
Hanyu Chang ◽  
Yongqiang Yuan ◽  
Qingyun Wang ◽  
Yuhao Li ◽  
...  

Global navigation satellite systems (GNSS) have been playing an indispensable role in providing positioning, navigation and timing (PNT) services to global users. Over the past few years, GNSS have developed rapidly, with abundant networks, modernized constellations, and multi-frequency observations. To take full advantage of multi-constellation and multi-frequency GNSS, several new mathematical models have been developed, such as multi-frequency ambiguity resolution (AR) and uncombined data processing with raw observations. In addition, new GNSS products, including the uncalibrated phase delay (UPD), the observable signal bias (OSB), and the integer recovery clock (IRC), have been generated and provided by analysis centers to support advanced GNSS applications.

However, the increasing number of GNSS observations poses a great challenge to the fast generation of multi-constellation and multi-frequency products. In this study, we propose an efficient solution that realizes fast updating of multi-GNSS real-time products by making full use of advanced computing techniques. Firstly, instead of the traditional vector operations, the "level-3 operations" (matrix by matrix) of the Basic Linear Algebra Subprograms (BLAS) are used as much as possible in the least-squares (LSQ) processing, which improves efficiency thanks to central processing unit (CPU) optimization and faster memory data transfers. Furthermore, most steps of multi-GNSS data processing are transformed from serial mode to parallel mode to take advantage of multi-core CPU architectures and graphics processing unit (GPU) computing resources. Moreover, we choose the OpenBLAS library for matrix computation, as it performs well in parallel environments.

The proposed method is then validated on a 3.30 GHz AMD CPU with 6 cores. The results demonstrate that the proposed method can substantially improve the processing efficiency of multi-GNSS product generation. For the precise orbit determination (POD) solution with 150 ground stations and 128 satellites (GPS/BDS/Galileo/GLONASS/QZSS) in ionosphere-free (IF) mode, the processing time can be shortened from 50 to 10 minutes, which guarantees hourly updating of multi-GNSS ultra-rapid orbit products. The processing time of uncombined POD can also be reduced by about 80%. Meanwhile, the multi-GNSS real-time clock products can easily be generated at 5-second or even higher sampling rates. In addition, the processing efficiency of UPD and OSB products can be increased by 4-6 times.
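The level-3 versus vector-operation point can be illustrated with a toy normal-equation accumulation in Python (NumPy delegates `@` to a BLAS such as OpenBLAS); the matrix sizes are invented and far smaller than a real POD problem:

```python
import numpy as np

m, n = 2000, 200
A = np.random.rand(m, n)                # design matrix (observations x params)
y = np.random.rand(m)

# Vector-oriented accumulation: one rank-1 update per observation row.
N1 = np.zeros((n, n))
for i in range(m):
    N1 += np.outer(A[i], A[i])          # many small level-2-style updates

# Level-3 formulation: one matrix-matrix product does the same work in a
# single BLAS call, exploiting CPU caches and SIMD units.
N2 = A.T @ A
x_hat = np.linalg.solve(N2, A.T @ y)    # LSQ solution from normal equations
print(np.allclose(N1, N2))
```

The single `A.T @ A` call is dramatically faster than the accumulation loop, which is the same effect the abstract exploits in the LSQ step.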


2012
pp. 862-880
Author(s):  
Russ Miller ◽  
Charles Weeks

Grids represent an emerging technology that allows geographically- and organizationally-distributed resources (e.g., computer systems, data repositories, sensors, imaging systems, and so forth) to be linked in a fashion that is transparent to the user. The New York State Grid (NYS Grid) is an integrated computational and data grid that provides access to a wide variety of resources to users from around the world. NYS Grid can be accessed via a Web portal, where the users have access to their data sets and applications, but do not need to be made aware of the details of the data storage or computational devices that are specifically employed in solving their problems. Grid-enabled versions of the SnB and BnP programs, which implement the Shake-and-Bake method of molecular structure (SnB) and substructure (BnP) determination, respectively, have been deployed on NYS Grid. Further, through the Grid Portal, SnB has been run simultaneously on all computational resources on NYS Grid as well as on more than 1100 of the over 3000 processors available through the Open Science Grid.


2020
Author(s):  
Mariëlle Mulder ◽  
Delia Arnold ◽  
Christian Maurer ◽  
Marcus Hirtl

An operational framework is developed to provide timely and frequent source-term updates for volcanic emissions (ash and SO₂). The procedure includes running the Lagrangian particle dispersion model FLEXPART with an initial (a priori) source term and combining the output with observations (from satellite, ground-based, and other sources) to obtain an a posteriori source term. This work was part of the EUNADICS-AV project (eunadics-av.eu), which continues the work developed in the VAST project (vast.nilu.no). The aim is to ensure that, at certain time intervals during an event when new observational and meteorological data become available, an updated source term is provided to analysis and forecasting groups. The system is tested with the Grímsvötn eruption of 2011. Based on a source-term sensitivity test, one can find the optimum between a sufficiently detailed source term and the computational resources required. Because satellite and radar data from different sources become available at different times, the source term is generated with the data available earliest after the eruption starts, and data that become available later are used for evaluation.
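A minimal sketch of the a-priori-to-a-posteriori update, viewed as a regularized least-squares inversion, is given below; the sizes, noise level, and simple Tikhonov form are illustrative assumptions, not the EUNADICS-AV implementation:

```python
import numpy as np

n_obs, n_src = 50, 10                   # observations, source-term segments
M = np.random.rand(n_obs, n_src)        # source-receptor matrix from dispersion runs
x_prior = np.ones(n_src)                # a priori emission rates
x_true = np.linspace(0.5, 2.0, n_src)
y_obs = M @ x_true + 0.05 * np.random.randn(n_obs)   # satellite/ground obs

lam = 1.0                               # weight of the a priori constraint
# minimize ||M x - y||^2 + lam * ||x - x_prior||^2  (closed-form solution)
lhs = M.T @ M + lam * np.eye(n_src)
rhs = M.T @ y_obs + lam * x_prior
x_post = np.linalg.solve(lhs, rhs)      # a posteriori source term
print(x_post)
```

Re-solving this system whenever new observations arrive is what yields the periodic source-term updates the framework targets.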


Author(s):  
Alan Gelfand ◽  
Sujit K. Sahu

This article discusses the use of Bayesian analysis and methods to analyse the demography of plant populations, and more specifically to estimate the demographic rates of trees and how they respond to environmental variation. It examines data from individual (tree) measurements over an eighteen-year period, including diameter, crown area, maturation status, and survival, and from seed traps, which provide indirect information on fecundity. The multiple data sets are synthesized with a process model where each individual is represented by a multivariate state-space submodel for both continuous (fecundity potential, growth rate, mortality risk, maturation probability) and discrete states (maturation status). The results from plant population demography analysis demonstrate the utility of hierarchical modelling as a mechanism for the synthesis of complex information and interactions.
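A toy forward simulation of a single tree's coupled continuous and discrete states conveys the flavor of such a state-space submodel; all rates and the logistic maturation form are invented for illustration:

```python
import numpy as np

# One tree tracked over 18 annual steps: continuous diameter growth plus
# discrete maturation and survival states (parameters are illustrative).
rng = np.random.default_rng(1)
diameter, mature, alive = 10.0, False, True
for year in range(18):
    if not alive:
        break
    diameter += max(0.0, rng.normal(0.4, 0.1))        # growth increment (cm)
    if not mature:
        p_mat = 1 / (1 + np.exp(-(diameter - 20.0)))  # maturation probability
        mature = rng.random() < p_mat
    alive = rng.random() > 0.02                        # annual mortality risk
print(diameter, mature, alive)
```

In the hierarchical model, thousands of such individual trajectories are linked through shared population-level parameters that respond to environmental covariates.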


Author(s):  
Hong Shen ◽  
Yutao Zheng ◽  
Han Wang ◽  
Zhenqiang Yao

The inverse problem in laser forming involves heating-position planning and the determination of heating parameters. In this study, the heating positions are optimized in the laser forming of single curved shapes on the basis of processing efficiency. The algorithm uses a probability function to initialize the heating positions, which are taken to be the bending points. The optimization minimizes the total processing time by adjusting the heating positions while observing boundary conditions on the offset distances, the minimum bending angle, and the minimum distance between two adjacent heating positions. The optimized results are compared with those obtained by the distance-based model as well as with experimental data.
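The structure of such an optimization can be sketched as follows; the plate geometry, the placeholder time model, and the SLSQP solver choice are assumptions, not the authors' formulation:

```python
import numpy as np
from scipy.optimize import minimize

L, k = 100.0, 4                   # plate length (mm), number of heating lines
d_edge, d_min = 10.0, 5.0         # offset from edges, min spacing (mm)

def total_time(x):
    # Placeholder cost: more passes needed where curvature demand is higher.
    demand = 1.0 + 0.5 * np.cos(np.pi * (x - L / 2) / L)
    return np.sum(demand)

# Minimum-spacing constraints between adjacent heating positions.
cons = [{"type": "ineq", "fun": lambda x, i=i: x[i + 1] - x[i] - d_min}
        for i in range(k - 1)]
x0 = np.linspace(d_edge, L - d_edge, k)   # initial guess for the bending points
res = minimize(total_time, x0, bounds=[(d_edge, L - d_edge)] * k,
               constraints=cons, method="SLSQP")
print(res.x)                              # optimized heating positions
```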


2008
Vol 22 (09n11)
pp. 1833-1838
Author(s):  
DONG SOO KIM ◽  
SUNG WOO BAE ◽  
KYUNG HYUN CHOI

A Solid Freeform Fabrication (SFF) system using Selective Laser Sintering (SLS) is currently recognized as a leading process, and SLS is extending its applications to machinery and automobiles thanks to the variety of materials it can employ. Accuracy and processing time are especially important factors when a desired shape is fabricated with SLS. In the conventional SLS process, the laser spot size is fixed while the laser exposes the sliced figure, so it is difficult to fabricate the desired shape both accurately and rapidly. In this paper, to deal with these problems, an SFF system with the ability to change the spot size is developed. The system provides high accuracy and optimal processing time. Specifically, a variable beam expander is employed to adjust the spot size for the different figures on a sliced shape. The design and performance estimation of the SFF system employing a variable beam expander are presented, and the mechanism used to measure the real spot size generated by the variable beam expander is addressed. The reduction of total processing time is also an important issue in an SFF system. A digital mirror system (DMS) scans the laser beam with different spot sizes; the spot size is selected based on the sliced section to decrease the processing time and improve accuracy. In this study, optimal scan-path generation for the DMS is addressed; this development improves the overall processing efficiency and accuracy by considering the existing scan-path algorithm and the heat-energy distribution.
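A toy calculation shows why switching spot sizes pays off; the spot diameters, coverage rates, and region split are invented numbers:

```python
# Illustrative sketch of the variable-spot-size idea: a large spot for bulk
# interior hatching, a small spot near contours where accuracy matters.
SPOTS_UM = {"fine": 80, "coarse": 400}          # hypothetical expander settings

def choose_spot(dist_to_contour_um):
    """Small spot where the boundary must be resolved, large spot inside."""
    return "fine" if dist_to_contour_um < 2 * SPOTS_UM["fine"] else "coarse"

def scan_time_s(area_mm2, spot):
    # Toy model: the coarse spot covers (400/80)^2 = 25x the area per pass.
    rate = {"fine": 1.0, "coarse": 25.0}[spot]  # mm^2 per second
    return area_mm2 / rate

# A slice with 5 mm^2 of boundary region and 95 mm^2 of interior:
total = scan_time_s(5, "fine") + scan_time_s(95, "coarse")
print(f"{total:.1f} s mixed-spot vs {scan_time_s(100, 'fine'):.1f} s fine-only")
```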

