rank vector
Recently Published Documents


TOTAL DOCUMENTS

22
(FIVE YEARS 5)

H-INDEX

7
(FIVE YEARS 0)

2021 ◽  
Vol 27 (3) ◽  
Author(s):  
Soheyla Feyzbakhsh ◽  
Chunyi Li

Abstract: Let (X, H) be a polarized K3 surface with $$\mathrm{Pic}(X) = \mathbb{Z}H$$, and let $$C\in |H|$$ be a smooth curve of genus g. We give an upper bound on the dimension of global sections of a semistable vector bundle on C. This allows us to compute the higher rank Clifford indices of C for high genus. In particular, when $$g\ge r^2\ge 4$$, the rank r Clifford index of C can be computed by the restriction of Lazarsfeld–Mukai bundles on X corresponding to line bundles on the curve C. This is a generalization of the result of Green and Lazarsfeld for curves on K3 surfaces to higher rank vector bundles. We also apply the same method to the projective plane and show that the rank r Clifford index of a smooth plane curve of degree $$d \ge 5$$ is $$d-4$$, which is the same as the Clifford index of the curve.


Information ◽  
2021 ◽  
Vol 12 (4) ◽  
pp. 147
Author(s):  
Sameh K. Mohamed ◽  
Emir Muñoz ◽  
Vit Novacek

Knowledge graph embedding (KGE) models have become popular means for making discoveries in knowledge graphs (e.g., RDF graphs) in an efficient and scalable manner. The key to the success of these models is their ability to learn low-rank vector representations for knowledge graph entities and relations. Despite the rapid development of KGE models, state-of-the-art approaches have mostly focused on new ways to represent embedding interaction functions (i.e., scoring functions). In this paper, we argue that the choice of other training components, such as the loss function, hyperparameters and negative sampling strategies, can also have a substantial impact on model efficiency. This area has so far been rather neglected by previous works, and our contribution works towards closing this gap through a thorough analysis of possible choices of training loss functions, hyperparameters and negative sampling techniques. Finally, we investigate the effects of specific choices on the scalability and accuracy of knowledge graph embedding models.
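To make the interplay between a scoring function, a negative sampling strategy, and a loss function concrete, here is a minimal NumPy sketch using the well-known TransE scoring function with uniform tail corruption and a margin ranking loss. TransE, the toy embedding tables, and all dimensions are illustrative stand-ins, not components taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embedding tables: 5 entities, 2 relations, dimension 8 (all illustrative).
n_entities, n_relations, dim = 5, 2, 8
E = rng.normal(size=(n_entities, dim))
R = rng.normal(size=(n_relations, dim))

def transe_score(h, r, t):
    """TransE scoring function: higher (less negative) means more plausible."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

def uniform_negative(h, r, t):
    """Uniform negative sampling: corrupt the tail with a random other entity."""
    t_neg = rng.integers(n_entities)
    while t_neg == t:
        t_neg = rng.integers(n_entities)
    return h, r, t_neg

def margin_loss(pos, neg, margin=1.0):
    """Margin ranking loss commonly paired with translational scoring."""
    return max(0.0, margin + transe_score(*neg) - transe_score(*pos))

pos = (0, 1, 3)            # (head, relation, tail) triple
neg = uniform_negative(*pos)
loss = margin_loss(pos, neg)
```

Swapping the loss (e.g. to a logistic loss) or the sampling strategy changes only these small pieces while the scoring function stays fixed, which is the kind of training-component choice the paper analyzes.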


2021 ◽  
Vol 54 (7) ◽  
pp. 631-636
Author(s):  
Giorgio Picci ◽  
Wenqi Cao ◽  
Anders Lindquist
Keyword(s):  
Low Rank ◽  

2020 ◽  
Author(s):  
Xiao Lai ◽  
Pu Tian

Abstract: Supervised machine learning, especially deep learning based on a wide variety of neural network architectures, has contributed tremendously to fields such as marketing, computer vision and natural language processing. However, the development of unsupervised machine learning algorithms has been a bottleneck of artificial intelligence. Clustering is a fundamental unsupervised task in many different subjects. Unfortunately, no present algorithm is satisfactory for clustering high dimensional data with strong nonlinear correlations. In this work, we propose a simple and highly efficient hierarchical clustering algorithm based on encoding by composition rank vectors and a tree structure, and demonstrate its utility by clustering protein structural domains. No record comparison, an expensive step common and essential to all present clustering algorithms, is involved. Consequently, it achieves hierarchical clustering with linear time and space complexity, and is thus applicable to arbitrarily large datasets. The key factor in this algorithm is the definition of composition, which depends on the physical nature of the target data and therefore needs to be constructed case by case. Nonetheless, the algorithm is general and applicable to any high dimensional data with strong nonlinear correlations. We hope this algorithm will inspire a rich research field of encoding-based clustering well beyond composition rank vector trees.
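A rough sketch of the encoding idea follows: flat rather than hierarchical, with a generic numeric vector standing in for the domain-specific composition the abstract calls for. Grouping records by their composition rank vector buckets them in a single pass with no record-to-record comparison, which is what gives the linear time and space behavior.

```python
from collections import defaultdict

def composition_rank_vector(record):
    """Rank the components of a composition vector; ties keep index order.
    The composition itself is problem-specific (here: raw numeric features)."""
    order = sorted(range(len(record)), key=lambda i: (record[i], i))
    ranks = [0] * len(record)
    for rank, idx in enumerate(order):
        ranks[idx] = rank
    return tuple(ranks)

def encode_and_cluster(records):
    """Single pass: hash each record's rank vector into a bucket.
    No pairwise comparison, so time and space are O(n)."""
    clusters = defaultdict(list)
    for rid, rec in enumerate(records):
        clusters[composition_rank_vector(rec)].append(rid)
    return dict(clusters)

data = [
    [0.10, 0.7, 0.2],   # component ordering -> rank vector (0, 2, 1)
    [0.05, 0.9, 0.3],   # same ordering -> same cluster
    [0.80, 0.1, 0.4],   # rank vector (2, 0, 1)
]
clusters = encode_and_cluster(data)
```

The paper's tree structure would refine these buckets hierarchically (e.g. by prefixes of the rank vector); this sketch shows only the comparison-free encoding step.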


This paper derives a new framework for the classification of natural textures based on gradient rank vectors derived on a 2 × 2 grid. The paper identifies the ambiguity that arises in deriving ranks when two or more positions of the grid hold the same value. To address this ambiguity without increasing the total number of rank vectors on d positions, it derives a rule-based rank vector framework. It replaces the 2 × 2 grid with the column position of the Rule-based Rank Word Matrix (RRWM); the range of column positions is d! for d positions. The paper then divides the RRW texture image into a 3 × 3 grid and derives cross and diagonal rule-based rank words. From these, it derives the Rule-based Rank Word Cross and Diagonal Texture Matrix (RRW-CDTM) and extracts GLCM features for effective texture classification. Experimental results on various texture databases reveal the classification accuracy of the proposed method, which is compared with state-of-the-art local-pattern-based approaches.
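A minimal sketch of ranking a 2 × 2 grid with a tie rule follows. The paper's exact rule is not given here, so this sketch assumes (as a labeled assumption) that equal values keep raster-scan order; under that rule ties never create extra patterns, keeping the vocabulary at 4! = 24 rank words.

```python
from itertools import permutations

def rule_based_rank_vector(grid2x2):
    """Rank the four pixels of a 2x2 grid (row-major order).
    Tie rule (an assumption here, not the paper's stated rule):
    equal values keep scan order, so the total number of distinct
    rank vectors stays at 4! = 24."""
    flat = [grid2x2[0][0], grid2x2[0][1], grid2x2[1][0], grid2x2[1][1]]
    order = sorted(range(4), key=lambda i: (flat[i], i))
    ranks = [0] * 4
    for rank, idx in enumerate(order):
        ranks[idx] = rank
    return tuple(ranks)

# Map each of the 4! possible rank vectors to a rank-word index 0..23
# (itertools.permutations emits permutations in lexicographic order).
RANK_WORDS = {p: i for i, p in enumerate(permutations(range(4)))}

patch = [[12, 12],
         [ 7, 30]]          # the two 12s tie; scan order breaks the tie
rv = rule_based_rank_vector(patch)
word = RANK_WORDS[rv]
```

The resulting word indices are what a matrix like the RRWM would record per grid position before the 3 × 3 cross/diagonal stage.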


Author(s):  
Oscar Karnalim

Based on the fact that bytecode always exists in a Java archive, a bytecode-based Java archive search engine had been developed [1, 2]. Although this system is quite effective, it still lacks scalability, since many modules apply recursive calls and the system utilizes only one core (single thread). In this research, the Java archive search engine architecture is redesigned to improve its scalability. All recursions are converted to iterative forms, although most of these modules are logically recursive and quite difficult to convert (e.g. Tarjan's strongly connected component algorithm). Recursion conversion can be conducted by following the respective recursive pattern: each recursion is broken down into four parts (the before and after actions of the current call and of its children) and converted to iteration with the help of a caller reference. This conversion mechanism improves scalability by avoiding the stack overflow errors caused by deep method calls. System scalability is also improved by applying multithreading, which substantially cuts processing time; shorter processing time enables the system to handle larger data. Multithreading is applied to the major parts, namely the indexer, the vector space model (VSM) retriever, the low-rank vector space model (LRVSM) retriever, and the semantic relatedness calculator (which also involves multiprocessing). The correctness of both the recursion conversion and the multithreaded design is supported by the fact that all implementations yield the same results.
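The described conversion pattern, splitting each call into before/after phases tracked on an explicit stack, can be sketched as follows. The graph, names, and the post-order traversal are illustrative rather than the paper's actual modules, and the sketch is in Python rather than Java for brevity; the same frame-with-phase idea applies to harder cases like Tarjan's SCC algorithm.

```python
def dfs_postorder_iterative(graph, root):
    """Recursive DFS rewritten iteratively to avoid stack-overflow on
    deep inputs: each frame records a node plus a phase flag saying
    whether its children have already been scheduled ('after' phase)."""
    result = []
    visited = {root}
    stack = [(root, False)]          # (node, after_children?)
    while stack:
        node, after = stack.pop()
        if after:                    # post-action: runs once children are done
            result.append(node)
            continue
        stack.append((node, True))   # re-push frame to schedule the post-action
        for child in reversed(graph.get(node, [])):
            if child not in visited:
                visited.add(child)
                stack.append((child, False))
    return result

g = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
order = dfs_postorder_iterative(g, "a")   # children finish before their caller
```

The re-pushed `(node, True)` frame plays the role of the caller reference in the text: it is how a frame regains control after all of its children complete.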


Author(s):  
N V Ganapathi Raju ◽  
◽  
V Vijay Kumar ◽  
O Srinivasa Rao

2015 ◽  
Vol 2 (1) ◽  
Author(s):  
Svetlana Ermakova

Abstract: In this article we establish an analogue of the Barth-Van de Ven-Tyurin-Sato theorem. We prove that a finite rank vector bundle on a complete intersection of finite codimension in a linear ind-Grassmannian is isomorphic to a direct sum of line bundles.

