A Note on Strong Riesz Summability

1982 ◽  
Vol 25 (3) ◽  
pp. 263-272
Author(s):  
B. Thorpe

Abstract: This note proves that if 1 ≤ p < ∞ and 1 − 1/p < k < 2 − 1/p, then the space of sequences strongly Riesz summable $[R, \lambda, k]_p$ to 0 has AK. Using general results of Jakimovski and Russell, it is then possible to deduce a best-possible limitation condition and a convergence-factor result for $[R, \lambda, k]_p$.
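For orientation, here is the standard definition of the AK (sectional convergence) property as a hedged sketch; the notation $e^{(k)}$ for the unit sequences is conventional and not taken from the paper itself. A sequence space $E$ with continuous coordinate functionals has AK if every $x = (x_k) \in E$ is the limit of its finite sections:

$$x^{[n]} = \sum_{k=1}^{n} x_k e^{(k)} \longrightarrow x \quad \text{in } E \text{ as } n \to \infty.$$

It is this sectional convergence that lets the general machinery of Jakimovski and Russell yield the limitation condition and the convergence-factor result cited above.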

2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Adisorn Kittisopaporn ◽  
Pattrawut Chansangiam ◽  
Wicharn Lewkeeratiyutkul

Abstract: We derive an iterative procedure for solving a generalized Sylvester matrix equation $AXB+CXD=E$, where $A,B,C,D,E$ are conforming rectangular matrices. Our algorithm is based on gradients and the hierarchical identification principle. We convert the matrix iteration process to a first-order linear difference vector equation with a matrix coefficient. The Banach contraction principle reveals that the sequence of approximate solutions converges to the exact solution for any initial matrix if and only if the convergence factor belongs to an open interval. The contraction principle also gives the convergence rate and the error analysis, governed by the spectral radius of the associated iteration matrix. We obtain the fastest convergence factor, so that the spectral radius of the iteration matrix is minimized. In particular, we obtain iterative algorithms for the matrix equation $AXB=C$, the Sylvester equation, and the Kalman–Yakubovich equation. We give numerical experiments of the proposed algorithm to illustrate its applicability, effectiveness, and efficiency.
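The abstract does not reproduce the iteration itself; the NumPy sketch below shows the common gradient-based, hierarchical-identification style of update for $AXB+CXD=E$ (two virtual subsystems averaged per step), with the step size `mu` playing the role of the convergence factor. The specific update rule and the step-size bound are standard choices from the gradient-iterative literature, not necessarily the exact scheme of this paper.

```python
import numpy as np

def gi_sylvester(A, B, C, D, E, mu, tol=1e-10, max_iter=100_000):
    """Gradient-based iteration for A X B + C X D = E, in the
    hierarchical-identification style: split the equation into the two
    virtual subsystems A X B = E - C X D and C X D = E - A X B, take a
    gradient step for each, and average.  `mu` is the convergence factor."""
    X = np.zeros((A.shape[1], B.shape[0]))   # any initial matrix may be used
    for _ in range(max_iter):
        R = E - A @ X @ B - C @ X @ D        # shared residual
        X1 = X + mu * (A.T @ R @ B.T)        # gradient step, subsystem 1
        X2 = X + mu * (C.T @ R @ D.T)        # gradient step, subsystem 2
        X_next = 0.5 * (X1 + X2)             # hierarchical average
        if np.linalg.norm(X_next - X) < tol:
            return X_next
        X = X_next
    return X

# A common sufficient (not optimal) bound on the convergence factor:
#   0 < mu < 2 / (||A||_2**2 * ||B||_2**2 + ||C||_2**2 * ||D||_2**2).
```

Taking $C$ and $D$ to be zero matrices degenerates the loop into a plain gradient iteration for $AXB=E$, in line with the special cases mentioned in the abstract.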


Filomat ◽  
2012 ◽  
Vol 26 (3) ◽  
pp. 607-613 ◽  
Author(s):  
Xiang Wang ◽  
Dan Liao

A hierarchical gradient-based iterative algorithm [L. Xie et al., Computers and Mathematics with Applications 58 (2009) 1441-1448] has been presented for finding numerical solutions of general linear matrix equations, and the convergence factor was investigated there through numerical experiments. However, the authors pointed out that how to choose the best convergence factor remained an open problem. In this paper, we discuss the optimal convergence factor for the gradient-based iterative algorithm and obtain it explicitly. Moreover, the theoretical results of this paper can be extended to other gradient-type methods. Results of numerical experiments are consistent with the theoretical findings.
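For context on what "optimal convergence factor" means here, the sketch below records the classical Richardson-iteration fact: when the error of a gradient iteration evolves through an iteration matrix $I - \mu M$ with $M$ symmetric positive definite, the spectral radius is minimized at $\mu = 2/(\lambda_{\min} + \lambda_{\max})$. This is offered as standard background, not as the paper's exact derivation.

```python
import numpy as np

def optimal_convergence_factor(M):
    """Minimize the spectral radius of I - mu*M for symmetric positive
    definite M (classical Richardson step-size result)."""
    eigs = np.linalg.eigvalsh(M)             # ascending eigenvalues
    lam_min, lam_max = eigs[0], eigs[-1]
    mu_opt = 2.0 / (lam_min + lam_max)       # optimal convergence factor
    rho_opt = (lam_max - lam_min) / (lam_max + lam_min)  # minimal radius
    return mu_opt, rho_opt

# Example: for A X B = C, vectorization gives vec(AXB) = (B^T kron A) vec(X),
# so the relevant M is the Gram matrix (B B^T) kron (A^T A).
```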


2002 ◽  
Vol 33 (2) ◽  
pp. 161-166
Author(s):  
Y. Okuyama

In this paper, we shall prove a general theorem which contains two theorems on the absolute Nörlund summability and the absolute Riesz summability of orthogonal series.
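For orientation (standard definitions, assumed rather than quoted from the paper): given positive weights $p_n$ with $P_n = p_0 + p_1 + \cdots + p_n$ and partial sums $s_n$, the Nörlund and Riesz means are

$$t_n = \frac{1}{P_n}\sum_{k=0}^{n} p_{n-k}\, s_k \quad (N, p_n), \qquad \bar{t}_n = \frac{1}{P_n}\sum_{k=0}^{n} p_k\, s_k \quad (\bar{N}, p_n),$$

and the series is absolutely summable, $|N, p_n|$ or $|\bar{N}, p_n|$, when the corresponding means are of bounded variation, i.e. $\sum_n |t_n - t_{n-1}| < \infty$.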


2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Rao M. C. Karthik ◽  
Rashmi L. Malghan ◽  
Fuat Kara ◽  
Arunkumar Shettigar ◽  
Shrikantha S. Rao ◽  
...  

The paper aims to investigate the machining performance of SS316 under sustainable cooling environments: dry, wet, and cryogenic (LN2, liquid nitrogen). Furthermore, a "one-parametric approach" was utilized to study the influence of, and carry out a comparative analysis between, LN2 and dry machining and between LN2 and wet machining conditions. Response surface methodology (RSM) is incorporated to build a relationship model between the independent variables (spindle speed S (rpm), feed rate F (mm/min), and depth of cut D (mm)) and the dependent variable, surface roughness (Ra). Since more than one independent variable is involved, the regression equation is a multiple linear regression. Based on the attained coefficients of the independent variables, their respective impacts on surface roughness are identified. The comparative analysis revealed that LN2 machining improved surface finish by up to 64.9% over dry machining and 54.9% over wet machining, indicating the benefits of LN2 for achieving better Ra. Benchmarking of the proposed hybrid-bias (BNN-SVR) algorithm showcases its propensity to escape local minima and converge to the optimal target value. A distinctive ability of the BNN-SVR is to fetch the partially trained weights of the BNN model into the SVR model, converting a static learning capability into a dynamic one. The performances of the adopted prediction approaches are compared through their ranges of attained error deviation, i.e., RSM: 3.95%–8.43%, BNN: 2.36%–5.88%, and SVR: 1.04%–3.61%. The hybrid-bias (BNN-SVR) is the most suitable prediction model, as it attains the least error in predicting Ra, while SVR surpasses the standalone BNN and RSM approaches because of its convergence factor and narrow margin of error.
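To make the regression step concrete, here is a minimal NumPy sketch of fitting the multiple linear regression $Ra \approx b_0 + b_1 S + b_2 F + b_3 D$ by least squares; all numeric values are invented for illustration and are not data from the study.

```python
import numpy as np

# Hypothetical observations (S: spindle speed in rpm, F: feed in mm/min,
# D: depth of cut in mm, Ra: surface roughness) -- illustrative only.
S  = np.array([800, 1000, 1200, 1400, 1600, 1000, 1400], dtype=float)
F  = np.array([100, 250, 150, 300, 200, 200, 100], dtype=float)
D  = np.array([0.6, 0.2, 1.0, 0.4, 0.8, 0.6, 0.2])
Ra = np.array([1.5, 1.4, 1.8, 1.2, 1.3, 1.4, 1.1])

# Design matrix with intercept column: Ra ~ b0 + b1*S + b2*F + b3*D
X = np.column_stack([np.ones_like(S), S, F, D])
coef, *_ = np.linalg.lstsq(X, Ra, rcond=None)
b0, b1, b2, b3 = coef

# The sign and magnitude of b1, b2, b3 indicate each factor's impact on Ra,
# which is how the abstract reads off the influence of S, F, and D.
print(f"Ra = {b0:.3f} + {b1:.5f}*S + {b2:.5f}*F + {b3:.3f}*D")
```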


2002 ◽  
Vol 54 (2) ◽  
pp. 303-323 ◽  
Author(s):  
Fereidoun Ghahramani ◽  
Sandy Grabiner

Abstract: We study convergence in weighted convolution algebras L1(ω) on R+, with the weights chosen such that the corresponding weighted space M(ω) of measures is also a Banach algebra and is the dual space of a natural related space of continuous functions. We determine convergence factors η for which weak*-convergence of {λn} to λ in M(ω) implies norm convergence of λn * f to λ * f in L1(ωη). We find necessary and sufficient conditions which depend on ω and f, and also find necessary and sufficient conditions for η to be a convergence factor for all L1(ω) and all f in L1(ω). We also give some applications to the structure of weighted convolution algebras. As a preliminary result we observe that η is a convergence factor for ω and f if and only if convolution by f is a compact operator from M(ω) (or L1(ω)) to L1(ωη).
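Restating the abstract's central notion in display form (a direct transcription; the integral formula for the weighted norm is the standard one and is assumed rather than quoted): $\eta$ is a convergence factor for $\omega$ and $f$ when

$$\lambda_n \xrightarrow{\,w^*\,} \lambda \ \text{in } M(\omega) \quad \Longrightarrow \quad \|\lambda_n * f - \lambda * f\|_{L^1(\omega\eta)} \to 0, \qquad \|g\|_{L^1(\omega\eta)} = \int_0^\infty |g(t)|\, \omega(t)\eta(t)\, dt.$$

By the paper's preliminary result, this holds if and only if convolution by $f$ is a compact operator from $M(\omega)$ (or $L^1(\omega)$) into $L^1(\omega\eta)$.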

