Matrix Transformations

2021 ◽  
pp. 161-186
Author(s):  
Gokulananda Das ◽  
Sudarsan Nanda
Diagnostics ◽  
2021 ◽  
Vol 11 (5) ◽  
pp. 773
Author(s):  
Xiaojun Chen ◽  
Zhenqi Jiang ◽  
Xiao Han ◽  
Xiaolin Wang ◽  
Xiaoying Tang

Magnetic particle imaging (MPI) is a novel non-invasive molecular imaging technology that images the distribution of superparamagnetic iron oxide nanoparticles (SPIONs). It is unaffected by imaging depth and offers high sensitivity, high resolution, and no ionizing radiation. High-precision, high-quality MPI reconstruction is of enormous practical importance, and many studies have been conducted to improve reconstruction accuracy and quality. Reconstruction based on the system matrix (SM) is an important branch of MPI reconstruction. In this review, the principle of MPI, current methods for constructing the SM, and the theory of SM-based MPI reconstruction are discussed. SM-based approaches face the following main problems: the reconstruction problem is an inverse and ill-posed problem, complex background signals seriously affect the reconstruction results, the field of view cannot cover the entire object, and the available 3D datasets are of relatively large volume. We compare and group different studies on these issues, including SM-based MPI reconstruction with the state-of-the-art Tikhonov regularization, SM-based reconstruction with improved methods, methods that subtract the background signal, approaches that expand the spatial coverage, and matrix transformations that accelerate SM-based reconstruction. In addition, the phantoms and performance indicators currently used for SM-based reconstruction are listed. Finally, research suggestions for MPI reconstruction are proposed. We expect this review to serve as a reference for researchers in MPI reconstruction and to promote future applications of MPI in clinical medicine.
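The Tikhonov-regularized reconstruction named in the abstract can be sketched as follows. This is a minimal illustration, not any particular paper's implementation: the system matrix `S`, measured signal `u`, and regularization weight `lam` are stand-in names, and the toy data are randomly generated rather than taken from an MPI scanner.

```python
import numpy as np

def tikhonov_reconstruct(S, u, lam=1e-2):
    """Solve min_c ||S c - u||^2 + lam ||c||^2 via the normal equations:
    c = (S^H S + lam I)^{-1} S^H u."""
    n = S.shape[1]
    A = S.conj().T @ S + lam * np.eye(n)
    b = S.conj().T @ u
    return np.linalg.solve(A, b)

# Toy example: a random stand-in "system matrix" and a sparse
# particle-concentration vector with two point sources.
rng = np.random.default_rng(0)
S = rng.standard_normal((50, 20))
c_true = np.zeros(20)
c_true[5], c_true[12] = 1.0, 0.5
u = S @ c_true + 0.01 * rng.standard_normal(50)   # noisy measurement
c_hat = tikhonov_reconstruct(S, u, lam=1e-2)       # regularized estimate
```

The regularization term damps the noise amplification caused by the ill-conditioning of the inverse problem; choosing `lam` (e.g., by the L-curve or discrepancy principle) is itself one of the topics the surveyed papers address.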


Mathematics ◽  
2018 ◽  
Vol 6 (11) ◽  
pp. 268 ◽  
Author(s):  
Kuddusi Kayaduman ◽  
Fevzi Yaşar

In 1978, Wang introduced the domain of the Nörlund matrix on the classical sequence spaces lp and l∞, where 1 ≤ p < ∞. Tuğ and Başar studied the matrix domain of the Nörlund mean on the sequence spaces f0 and f in 2016. Additionally, in 2017 Tuğ defined and investigated a new sequence space as the domain of the Nörlund matrix on the space of bounded variation sequences. In this article, we define new spaces as the domains of the Nörlund mean on bs and cs, the spaces of bounded and convergent series, respectively, and examine their inclusion relations. We define norms over them and investigate whether these new spaces satisfy the conditions of a Banach space. Finally, we determine their α-, β-, and γ-duals, and characterize matrix transformations on and into these spaces.
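For context, the Nörlund mean referred to above is commonly defined as the lower-triangular matrix generated by a sequence t = (t_k) of nonnegative reals with t_0 > 0 (a standard rendering, not quoted from the article):

```latex
% N\"orlund matrix N^t generated by t = (t_k), t_0 > 0, t_k \ge 0
\[
(N^t)_{nk} =
\begin{cases}
  \dfrac{t_{n-k}}{T_n}, & 0 \le k \le n,\\[4pt]
  0, & k > n,
\end{cases}
\qquad
T_n = \sum_{j=0}^{n} t_j .
\]
```

The matrix domain studied in such papers is then the set of sequences x for which N^t x lies in the given space (here bs or cs).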


1968 ◽  
Vol 20 ◽  
pp. 727-734 ◽  
Author(s):  
I. J. Maddox

Let X = (X, p) be a seminormed complex linear space with zero θ. Natural definitions of convergent sequence, Cauchy sequence, absolutely convergent series, etc., can be given in terms of the seminorm p. Let us write C = C(X) for the set of all convergent sequences and L∞ for the set of all bounded sequences; the set of Cauchy sequences is defined analogously.
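The "natural definitions" in terms of the seminorm p can be spelled out as follows (a standard rendering for the reader's convenience, not quoted from the paper):

```latex
% In the seminormed space (X, p):
\[
x_k \to x \iff p(x_k - x) \to 0, \qquad
(x_k)\ \text{Cauchy} \iff p(x_m - x_n) \to 0 \ \ (m, n \to \infty),
\]
\[
(x_k)\ \text{bounded} \iff \sup_k p(x_k) < \infty, \qquad
\textstyle\sum_k x_k\ \text{absolutely convergent} \iff \sum_k p(x_k) < \infty .
\]
```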


10.14311/1029 ◽  
2008 ◽  
Vol 48 (4) ◽  
Author(s):  
I. Šimeček

Sparse matrix-vector multiplication (shortly SpM×V) is one of the most common subroutines in numerical linear algebra. The problem is that the memory access patterns during SpM×V are irregular, and utilization of the cache can suffer from low spatial or temporal locality. Approaches to improving the performance of SpM×V are based on matrix reordering and register blocking. These matrix transformations are designed to handle randomly occurring dense blocks in a sparse matrix, and their efficiency depends strongly on the presence of suitable blocks. The overhead of reorganizing a matrix from one format to another is often of the order of tens of executions of SpM×V. For this reason, such a reorganization pays off only if the same matrix A is multiplied by multiple different vectors, e.g., in iterative linear solvers. This paper introduces an unusual approach to accelerating SpM×V. This approach can be combined with other acceleration approaches and consists of three steps: 1) dividing matrix A into non-empty regions, 2) choosing an efficient way to traverse these regions (in other words, choosing an efficient ordering of partial multiplications), 3) choosing the optimal type of storage for each region. All three steps are tightly coupled. The first step divides the whole matrix into smaller parts (regions) that can fit in the cache. The second step improves the locality during multiplication through better utilization of distant references. The last step maximizes the machine computation performance of the partial multiplication for each region. In this paper, we describe aspects of these three steps in more detail (including fast and time-inexpensive algorithms for all steps). Our measurements prove that our approach gives a significant speedup for almost all matrices arising from various technical areas.
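As a baseline for the locality problem the paper targets, a plain SpM×V over the compressed sparse row (CSR) format can be sketched as below. This is an illustrative sketch, not the paper's region-based algorithm: the irregular accesses to `x[indices[j]]` in the inner loop are exactly what causes the poor cache behavior that region partitioning and per-region storage selection aim to fix.

```python
import numpy as np

def csr_spmv(indptr, indices, data, x):
    """y = A @ x for a matrix A stored in CSR format.
    indptr[i]..indptr[i+1] delimits the nonzeros of row i;
    indices[j] is the column of the j-th stored value data[j]."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        for j in range(indptr[i], indptr[i + 1]):
            # x is accessed through indices[j] -> irregular, cache-unfriendly
            y[i] += data[j] * x[indices[j]]
    return y

# 3x3 example matrix: [[1, 0, 2], [0, 3, 0], [4, 0, 5]]
indptr  = np.array([0, 2, 3, 5])
indices = np.array([0, 2, 1, 0, 2])
data    = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x = np.array([1.0, 1.0, 1.0])
y = csr_spmv(indptr, indices, data, x)  # -> [3. 3. 9.]
```

The region-based scheme of the paper would instead split A into cache-sized submatrices, multiply them in an order that reuses loaded parts of x, and pick a storage format (e.g., dense vs. CSR) per region.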


2019 ◽  
Vol 22 (2) ◽  
pp. 191-200
Author(s):  
Enno Kolk

Matrix transformations related to certain subsets of the space of ideal convergent sequences are characterized. The results obtained here are connected with the author's previous investigations of transformations defined by infinite matrices of bounded linear operators.

