Efficient computation of tridiagonal matrices largest eigenvalue

2018 ◽ Vol 330 ◽ pp. 268-275
Author(s): Diego F.G. Coelho, Vassil S. Dimitrov, L. Rakai

2014 ◽ Vol 2 (1)
Author(s): Pentti Haukkanen, Mika Mattila, Jorma K. Merikoski, Alexander Kovacec

Abstract: Define n × n tridiagonal matrices T and S as follows: all entries of the main diagonal of T are zero and those of the first super- and subdiagonal are one. The entries of the main diagonal of S are two except the (n, n) entry, which is one, and those of the first super- and subdiagonal are minus one. Then, denoting by λ(·) the largest eigenvalue, $$\lambda(T)=2\cos\frac{\pi}{n+1},\qquad \lambda(S)=4\cos^{2}\frac{\pi}{2n+1}.$$ Using certain lower bounds for the largest eigenvalue, we provide lower bounds for these expressions and, further, lower bounds for sin x and cos x on certain intervals. Upper bounds can also be obtained in this way.
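The closed forms displayed above are reconstructed from the stated definitions (they follow from the standard spectra of these tridiagonal matrices, not quoted from the article). As a quick numerical sanity check, the sketch below, assuming NumPy is available, builds T and S for a given n and compares their largest eigenvalues with the cosine expressions.

```python
import numpy as np

def largest_eigenvalues(n):
    """Build the n x n tridiagonal matrices T and S described above and
    return (computed largest eigenvalue, closed form) for each."""
    # T: zero main diagonal, ones on the first super- and subdiagonal.
    T = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    # S: twos on the main diagonal except the (n, n) entry (which is one),
    # minus ones on the first super- and subdiagonal.
    S = 2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    S[n - 1, n - 1] = 1.0
    lam_T = np.linalg.eigvalsh(T).max()
    lam_S = np.linalg.eigvalsh(S).max()
    return ((lam_T, 2 * np.cos(np.pi / (n + 1))),
            (lam_S, 4 * np.cos(np.pi / (2 * n + 1)) ** 2))

print(largest_eigenvalues(10))  # each pair agrees to machine precision
```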


10.1558/37291 ◽ 2018 ◽ Vol 2 (2) ◽ pp. 242-263
Author(s): Stefano Rastelli, Kook-Hee Gil

This paper offers new insight into GenSLA classroom research in light of recent developments in the Minimalist Program (MP). Recent research in GenSLA has shown how generative linguistics and acquisition studies can inform the language classroom, mostly focusing on which linguistic aspects of target properties should be integrated as part of the classroom input. Based on insights from Chomsky’s ‘three factors for language design’ – which bring together the Faculty of Language, input and general principles of economy and efficient computation (the third-factor effect) for language development – we put forward a theoretical rationale for how classroom research can offer a unique environment to test learnability in L2 through the statistical enhancement of the input to which learners are exposed.


2016 ◽ Vol 63 (4) ◽ pp. 1-60
Author(s): Fedor V. Fomin, Daniel Lokshtanov, Fahad Panolan, Saket Saurabh

Author(s): Jürgen Jost, Raffaella Mulas, Florentin Münch

Abstract: We offer a new method for proving that the largest eigenvalue of the normalized graph Laplacian of a graph with n vertices is at least $$\frac{n+1}{n-1}$$, provided the graph is not complete, and that equality is attained if and only if the complement graph is a single edge or a complete bipartite graph with both parts of size $$\frac{n-1}{2}$$. With the same method, we also prove a new lower bound for the largest eigenvalue in terms of the minimum vertex degree, provided this is at most $$\frac{n-1}{2}$$.
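A minimal numerical illustration of the stated bound (not the authors' proof technique), assuming NumPy: the complete graph with one edge removed is not complete and its complement is a single edge, i.e. the stated equality case, so the largest normalized-Laplacian eigenvalue should come out exactly (n+1)/(n-1).

```python
import numpy as np

def normalized_laplacian(A):
    """L = I - D^{-1/2} A D^{-1/2}; assumes the graph has no isolated vertices."""
    d = A.sum(axis=1)
    Dinv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.eye(len(A)) - Dinv_sqrt @ A @ Dinv_sqrt

n = 7
# Complete graph on n vertices with the edge {n-1, n} removed.
A = np.ones((n, n)) - np.eye(n)
A[n - 2, n - 1] = A[n - 1, n - 2] = 0.0

lam_max = np.linalg.eigvalsh(normalized_laplacian(A)).max()
print(lam_max, (n + 1) / (n - 1))  # both approximately 1.3333 for n = 7
```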


2021 ◽ Vol 7 (3) ◽ pp. 41
Author(s): Emre Baspinar, Luca Calatroni, Valentina Franceschi, Dario Prandi

We consider Wilson-Cowan-type models for the mathematical description of orientation-dependent Poggendorff-like illusions. Our modelling improves on two previously proposed cortical-inspired approaches by embedding the sub-Riemannian heat kernel into the neuronal interaction term, in agreement with the intrinsically anisotropic functional architecture of V1, which is based on both local and lateral connections. For the numerical realisation of both models, we consider standard gradient descent algorithms combined with Fourier-based approaches for the efficient computation of the sub-Laplacian evolution. Our numerical results show that the use of the sub-Riemannian kernel allows us to reproduce numerically visual misperceptions and inpainting-type biases more markedly than the previous approaches.
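For readers unfamiliar with the Fourier-based evolution step, the sketch below is an illustration only: it uses the ordinary Euclidean Laplacian on a periodic 2D grid rather than the paper's sub-Riemannian operator, and assumes NumPy. It shows the general idea of diagonalizing the operator with the FFT so that one evolution step reduces to a pointwise multiplication in frequency space.

```python
import numpy as np

def heat_evolution_fft(u0, t, dx=1.0):
    """Evolve u0 under u_t = Laplacian(u) for time t on a periodic grid,
    by multiplying with the heat-kernel symbol exp(-t |k|^2) in Fourier space."""
    ny, nx = u0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)   # angular frequencies along x
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)   # angular frequencies along y
    KX, KY = np.meshgrid(kx, ky)
    multiplier = np.exp(-t * (KX**2 + KY**2))   # symbol of exp(t * Laplacian)
    return np.real(np.fft.ifft2(multiplier * np.fft.fft2(u0)))

# Example: diffuse a random stimulus; larger t gives stronger smoothing.
u0 = np.random.rand(128, 128)
u1 = heat_evolution_fft(u0, t=2.0)
```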

