multidimensional matrices
Recently Published Documents


TOTAL DOCUMENTS

25
(FIVE YEARS 5)

H-INDEX

6
(FIVE YEARS 1)

Author(s):  
E.I. Goncharov ◽  
P.L. Iljin ◽  
V.I. Munerman ◽  
T.A. Samoylova

Most modern high-availability information systems are either based on systems classified as artificial intelligence or include them as major components. Solving many artificial-intelligence problems relies on algorithms that implement the convolution operation (for example, algorithms for training neural networks). The article proposes an approach that, on the one hand, provides a strict formalization of this operation based on the algebra of multidimensional matrices and, on the other hand, yields practical advantages: it simplifies and reduces the cost of developing such information systems and shortens the execution time of queries to them, thanks to the simplicity of developing parallel algorithms and programs and the efficient use of parallel computing systems. Convolution is an indispensable operation in many scientific and technical problems, such as machine learning, data analysis, signal processing, and image-processing filters. Multidimensional convolutions play an important role and are widely used in various subject areas. At the same time, because of the complexity of the algorithms that implement them, in practice even three-dimensional convolutions are used much less often than one- and two-dimensional ones. Replacing a multidimensional convolution operation with a sequence of lower-dimensional convolutions significantly increases its computational complexity. The main reason for this lies in the absence of a unified strict definition of the operation and the overloading of the term "convolution" in mathematics.
Therefore, the article discusses a multidimensional-matrix computation model that effectively formalizes problems whose solution uses multidimensional convolution operations, and implements their solution efficiently by exploiting the natural parallelism inherent in the operations of the algebra of multidimensional matrices.
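The multidimensional convolution the abstract refers to can be illustrated with a minimal direct implementation. This is a sketch for exposition only, not the article's matrix-algebra formalism; the volume size and the averaging kernel below are illustrative assumptions.

```python
import numpy as np

def convolve_nd(a, k):
    """Direct 'valid'-mode n-dimensional convolution of array a with kernel k.

    Deliberately naive: every output element is an independent windowed sum,
    which is exactly the parallelism the multidimensional-matrix approach exploits.
    """
    out_shape = tuple(na - nk + 1 for na, nk in zip(a.shape, k.shape))
    out = np.zeros(out_shape)
    # Convolution (as opposed to correlation) flips the kernel along every axis.
    k_flipped = k[tuple(slice(None, None, -1) for _ in k.shape)]
    for idx in np.ndindex(*out_shape):
        window = a[tuple(slice(i, i + n) for i, n in zip(idx, k.shape))]
        out[idx] = np.sum(window * k_flipped)
    return out

# A three-dimensional example: a 4x4x4 volume convolved with a 2x2x2 averaging kernel.
vol = np.arange(64, dtype=float).reshape(4, 4, 4)
kernel = np.full((2, 2, 2), 1 / 8)
result = convolve_nd(vol, kernel)
print(result.shape)  # (3, 3, 3)
```

Because each output element depends only on its own window, the loop over `np.ndindex` can be distributed across processors with no synchronization, which is the practical advantage the abstract claims for the algebraic formulation.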


Author(s):  
B. Kazimi ◽  
F. Thiemann ◽  
M. Sester

<p><strong>Abstract.</strong> We explore the use of semantic segmentation in Digital Terrain Models (DTMs) for detecting man-made landscape structures in archaeological sites. DTM data are stored and processed as large matrices of depth 1, as opposed to depth 3 for RGB images. These matrices usually contain continuous real-valued information whose upper bound is not fixed, such as distance or height from a reference surface. This differs from RGB images, which contain integer values in the fixed range 0 to 255. Additionally, RGB images are usually stored in smaller multidimensional matrices and are directly suitable as inputs for a neural network, while large DTMs must be split into smaller sub-matrices before they can be used by neural networks. Thus, while the spatial information of pixels in RGB images is important only locally, within a single image, for DTM data it is important both locally, within a single sub-matrix processed by the neural network, and globally, in relation to the neighboring sub-matrices. To cope with these two differences, we apply min-max normalization to each input matrix fed to the neural network and use a slightly modified version of the DeepLabv3+ model for semantic segmentation. We show that with this architecture change and preprocessing, better results are achieved.</p>
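The splitting and per-tile min-max normalization described in the abstract can be sketched as follows. The tile size, the synthetic height data, and the handling of perfectly flat tiles are illustrative assumptions, not details from the paper.

```python
import numpy as np

def split_and_normalize(dtm, tile=256):
    """Split a large DTM matrix into tiles and min-max normalize each tile to [0, 1].

    Per-tile normalization removes the unbounded absolute heights while keeping
    the local relief that the segmentation network needs.
    """
    h, w = dtm.shape
    tiles = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            sub = dtm[r:r + tile, c:c + tile].astype(float)
            lo, hi = sub.min(), sub.max()
            if hi > lo:
                sub = (sub - lo) / (hi - lo)   # map this tile's range onto [0, 1]
            else:
                sub = np.zeros_like(sub)       # flat tile: avoid division by zero
            tiles.append(sub)
    return tiles

# Synthetic 512x512 "terrain" with heights around 500 m.
dtm = np.random.default_rng(0).normal(500.0, 30.0, size=(512, 512))
tiles = split_and_normalize(dtm)
print(len(tiles))  # 4 tiles of 256x256
```

Note that normalizing per tile, rather than over the whole DTM, is what makes each sub-matrix's values comparable to the fixed 0-255 range of RGB inputs; the global spatial relationships between tiles must then be tracked separately, as the abstract points out.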


2018 ◽  
Author(s):  
Jesse Geneson

Keszegh (2009) proved that the extremal function $ex(n, P)$ of any forbidden light $2$-dimensional 0-1 matrix $P$ is at most quasilinear in $n$, using a reduction to generalized Davenport-Schinzel sequences. We extend this result to multidimensional matrices by proving that any light $d$-dimensional 0-1 matrix $P$ has extremal function $ex(n, P,d) = O(n^{d-1}2^{\alpha(n)^{t}})$ for some constant $t$ that depends on $P$. To prove this result, we introduce a new family of patterns called $(P, s)$-formations, which are a generalization of $(r, s)$-formations, and we prove upper bounds on their extremal functions. In many cases, including permutation matrices $P$ with at least two ones, we are able to show that our $(P, s)$-formation upper bounds are tight.
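The containment notion behind the extremal function $ex(n, P)$ can be illustrated in the two-dimensional case: a 0-1 matrix $A$ contains a pattern $P$ if some submatrix of $A$ (chosen rows and columns, in order) has a 1 wherever $P$ does, and $ex(n, P)$ is the maximum number of ones in an $n \times n$ matrix avoiding $P$. The exhaustive checker below is a sketch for exposition, not an algorithm from the paper.

```python
import numpy as np
from itertools import combinations

def contains_pattern(A, P):
    """Brute-force test whether 0-1 matrix A contains 0-1 pattern P.

    Tries every order-preserving choice of rows and columns of A and checks
    that the selected submatrix dominates P (has a 1 wherever P has a 1).
    """
    n, m = A.shape
    p, q = P.shape
    for rows in combinations(range(n), p):
        for cols in combinations(range(m), q):
            sub = A[np.ix_(rows, cols)]
            if np.all(sub >= P):
                return True
    return False

identity = np.eye(3, dtype=int)
P_diag = np.array([[1, 0], [0, 1]])      # a 2x2 permutation pattern
P_anti = np.array([[0, 1], [1, 0]])      # the reversed permutation pattern
print(contains_pattern(identity, P_diag))  # True
print(contains_pattern(identity, P_anti))  # False
```

The example shows why containment is order-sensitive: the 3x3 identity matrix contains the diagonal pattern but avoids the anti-diagonal one, since no pair of its ones has the first one above and to the right of the second.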


2017 ◽  
Vol 340 (12) ◽  
pp. 2769-2781 ◽  
Author(s):  
Jesse T. Geneson ◽  
Peter M. Tian

2015 ◽  
Vol 8 (1) ◽  
pp. 13-19 ◽  
Author(s):  
Vadim Romanuke

Abstract The paper suggests a method of obtaining an approximate solution of an infinite noncooperative game on the unit hypercube. The method is based on sampling the players' payoff functions uniformly, with a constant step along each dimension of the hypercube. The author states conditions for sufficiently accurate sampling and proposes a method of reshaping the multidimensional matrix of a player's payoff values (the sampled form of that player's payoff function) into a matrix with the minimal possible number of dimensions while maintaining one-to-one indexing. Requirements on a finite NE (Nash equilibrium) strategy from the NE solution of the finite game, taken as an approximation of the initial infinite game, are given as definitions of the consistency of the approximate solution. Consistency ensures that the approximate solution is relatively independent of the sampling step, within its minimal neighborhood or under a minimally decreased sampling step. Reshaping the multidimensional matrices of players' payoff values down to the minimal number of dimensions, equal to the number of players, shortens the computations.
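The sampling step of the method can be sketched for a two-player game on the unit square: each payoff function is sampled on a uniform grid with a constant step, producing payoff matrices whose dimension count equals the number of players, and a pure-strategy equilibrium of the sampled game is then sought. The payoff functions and the step value below are illustrative assumptions, not the paper's examples.

```python
import numpy as np

step = 0.1
grid = np.arange(0.0, 1.0 + step / 2, step)   # uniform samples of [0, 1]
X, Y = np.meshgrid(grid, grid, indexing="ij")

# Illustrative smooth payoffs on the unit square (hypothetical, for the sketch).
U1 = -(X - 0.5) ** 2 + X * Y   # player 1's sampled payoff matrix (rows = player 1's strategy)
U2 = -(Y - 0.5) ** 2 + X * Y   # player 2's sampled payoff matrix (cols = player 2's strategy)

def pure_nash(U1, U2):
    """Index pairs (i, j) where each player's sampled strategy is a best reply."""
    best1 = U1 == U1.max(axis=0, keepdims=True)   # player 1's best replies per column
    best2 = U2 == U2.max(axis=1, keepdims=True)   # player 2's best replies per row
    return list(zip(*np.nonzero(best1 & best2)))

eqs = pure_nash(U1, U2)
print([(float(grid[i]), float(grid[j])) for i, j in eqs])
```

The paper's consistency notion would then be checked by repeating this with a decreased step and verifying that the equilibrium strategies found do not move outside a minimal neighborhood; the reshaping result guarantees that, however the payoff data are stored, two dimensions (one per player) suffice for this search.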

