symbol sequences
Recently Published Documents


TOTAL DOCUMENTS: 56 (five years: 8)
H-INDEX: 11 (five years: 1)

Author(s):  
Carlos Sarmiento ◽  
Jesus Savage

This paper presents a comparison between discrete Hidden Markov Models and Convolutional Neural Networks for the image classification task. By fragmenting an image into sections, it is feasible to obtain vectors that represent visual features locally; if a spatial ordering over those sections is fixed, an image can then be represented as a sequence of vectors. Using clustering techniques, we obtain an alphabet from these vectors, and symbol sequences are then constructed to build a statistical model that represents a class of images. Hidden Markov Models, combined with quantization methods, can handle noise and distortions in observations for computer vision problems such as the classification of images with lighting and perspective changes. We have tested architectures based on three, six and nine hidden states, favoring detection speed and low memory usage. Two types of ensemble models were also tested. We evaluated the precision of the proposed methods on a public-domain data set, obtaining competitive results with respect to fine-tuned Convolutional Neural Networks while using significantly fewer computing resources. This is of interest for mobile robots with on-board computers that have limited battery life but must be able to detect and add new objects to their classification systems.
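The pipeline described above can be sketched concretely. The following is a minimal Python sketch, not the authors' implementation: patches cut from an image in a fixed raster order are quantized into a symbol alphabet with k-means, and the resulting symbol sequence is scored under a discrete HMM via the forward algorithm. The patch size, alphabet size, and all names are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def image_to_symbols(img, kmeans, patch=8):
    """Cut a grayscale image (2-D array) into patches in raster order and
    map each patch to its cluster index, i.e. a symbol of the alphabet."""
    h, w = img.shape
    feats = [img[y:y + patch, x:x + patch].ravel()
             for y in range(0, h - patch + 1, patch)
             for x in range(0, w - patch + 1, patch)]
    return kmeans.predict(np.asarray(feats, dtype=float))

def hmm_loglik(seq, start, trans, emit):
    """Log-likelihood of a symbol sequence under a discrete HMM
    (forward algorithm with per-step scaling for numerical stability)."""
    alpha = start * emit[:, seq[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for s in seq[1:]:
        alpha = (alpha @ trans) * emit[:, s]  # predict, then weight by emission
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik

# The alphabet would be learned offline from training patches, e.g.:
# kmeans = KMeans(n_clusters=64, n_init=10).fit(training_patches)
# Classification: one HMM per class (parameters fitted offline, e.g. with
# Baum-Welch); pick the class whose model gives the highest log-likelihood:
# best = max(models, key=lambda c: hmm_loglik(symbols, *models[c]))
```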


Proceedings ◽  
2020 ◽  
Vol 47 (1) ◽  
pp. 26
Author(s):  
Rao Mikkilineni ◽  
Mark Burgin

Knowledge systems often have very sophisticated structures depicting cognitive and structural entities. For instance, representation of knowledge in the form of a text involves the structure of this text. This structure is represented by a hypertext, which is a network consisting of linguistic objects, such as words, phrases and sentences, with diverse links connecting them. Current computational machines and automata, such as Turing machines, process information in the form of symbol sequences. Here we discuss methods based on structural machines, which achieve higher flexibility and efficiency of information processing in comparison with regular models of computation. Being structurally universal abstract automata, structural machines allow working directly with knowledge structures formed by knowledge objects and connections between them.
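To illustrate the contrast drawn here, the toy Python sketch below (an illustration only, not the authors' formal construction of structural machines) holds a hypertext as a graph of linguistic objects and operates on that structure directly, rather than scanning a flat symbol sequence the way a Turing machine would. All names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeObject:
    text: str
    kind: str                                   # e.g. "word", "phrase", "sentence"
    links: dict = field(default_factory=dict)   # link label -> KnowledgeObject

# A tiny hypertext: a sentence linked to a phrase, which is linked to a word.
s = KnowledgeObject("structural machines process knowledge", "sentence")
p = KnowledgeObject("structural machines", "phrase")
s.links["subject"] = p
p.links["head"] = KnowledgeObject("machines", "word")

def follow(node, *labels):
    """A 'structural' operation: navigate labeled links of the knowledge
    structure itself, instead of reading cell after cell on a tape."""
    for label in labels:
        node = node.links[label]
    return node

print(follow(s, "subject", "head").text)        # -> "machines"
```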


Vestnik MEI ◽  
2020 ◽  
Vol 5 (5) ◽  
pp. 148-154
Author(s):  
Vadim N. Falk

So-called extra concepts are suggested for representing structurally defined objects and structures that have unlimited complexity in their traditional understanding. The concepts of extra-word, extra-regular expression, context-free extra-grammar, and context-free extra-language are extensions of the well-known concepts used in the theory of formal languages. Extra-words are a special case of symbol sequences; however, the set of all extra-words over any alphabet is countable, whereas the set of all symbol sequences is not. The periodic codes of rational number representations in a positional numeral system are, in this terminology, extra-words. The concept of an extra-tuple generalizes the concept of a tuple: extra-tuples can be interpreted both as finite sequences and as infinite sequences of the indicated type over an arbitrary, at most countable set, and the set of all such sequences remains countable. Using the introduced concepts, a countable family of domains of truth values is specified for multivalued and countably-valued logics, each of which is a bounded lattice of finite or countable cardinality with the traditional definitions of the basic logical operations of negation, conjunction, and disjunction. The hierarchical construction of the proposed truth domains makes it possible to introduce new logical operations that have no analogues in classical logic.
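A small worked example, assuming one natural reading of "extra-word" consistent with the abstract's mention of periodic codes of rationals: an infinite symbol sequence given by a finite prefix followed by an infinitely repeated period can be stored as the finite pair (prefix, period), which is why the set of all such objects stays countable. The helper below (an illustration, not the paper's notation) recovers exactly such a pair from the fractional part of p/q by long division.

```python
def rational_digits(p, q, base=10):
    """Return (prefix, period) for the base-`base` expansion of p/q, 0 <= p < q."""
    digits, seen = [], {}
    r = p % q
    while r and r not in seen:
        seen[r] = len(digits)   # remember where this remainder first appeared
        r *= base
        digits.append(r // q)
        r %= q
    if not r:                   # terminating expansion: empty period
        return digits, []
    i = seen[r]                 # the digits repeat from the first recurrence
    return digits[:i], digits[i:]

print(rational_digits(1, 6))    # -> ([1], [6])                i.e. 0.1(6)
print(rational_digits(1, 7))    # -> ([], [1, 4, 2, 8, 5, 7])  i.e. 0.(142857)
```

Every such pair is a finite object over a finite alphabet, so the collection of all of them can be enumerated, unlike the uncountable set of arbitrary infinite digit sequences.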


2019 ◽  
Vol 9 (16) ◽  
pp. 3391 ◽  
Author(s):  
Santiago Pascual ◽  
Joan Serrà ◽  
Antonio Bonafonte

Conversion from text to speech relies on the accurate mapping from linguistic to acoustic symbol sequences, for which current practice employs recurrent statistical models such as recurrent neural networks. Despite the good performance of such models (in terms of low distortion in the generated speech), their recursive structure with intermediate affine transformations tends to make them slow to train and to sample from. In this work, we explore two different mechanisms that enhance the operational efficiency of recurrent neural networks, and study their performance–speed trade-off. The first mechanism is based on the quasi-recurrent neural network, where expensive affine transformations are removed from temporal connections and placed only on feed-forward computational directions. The second mechanism includes a module based on the transformer decoder network, designed without recurrent connections but emulating them with attention and positioning codes. Our results show that the proposed decoder networks are competitive in terms of distortion when compared to a recurrent baseline, whilst being significantly faster in terms of CPU and GPU inference time. The best performing model is the one based on the quasi-recurrent mechanism, reaching the same level of naturalness as the recurrent neural network based model with a speedup of 11.2× on CPU and 3.3× on GPU.
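The quasi-recurrent idea is what makes the first mechanism fast: the expensive transformations run over all timesteps in parallel, and only a cheap element-wise recurrence remains sequential. The numpy sketch below illustrates this with causal convolutions and "fo-pooling"-style gating; the shapes and gate choices are assumptions for illustration, not the paper's exact architecture.

```python
import numpy as np

def qrnn_layer(x, Wz, Wf, k=2):
    """x: (T, d_in); Wz, Wf: (k * d_in, d_out) causal-convolution weights."""
    T, d_in = x.shape
    # Left-pad so each timestep sees only current and past inputs.
    xp = np.vstack([np.zeros((k - 1, d_in)), x])
    windows = np.stack([xp[t:t + k].ravel() for t in range(T)])   # (T, k*d_in)
    z = np.tanh(windows @ Wz)                    # candidate states, all at once
    f = 1.0 / (1.0 + np.exp(-(windows @ Wf)))    # forget gates, all at once
    h, out = np.zeros_like(z[0]), []
    for t in range(T):                           # element-wise recurrence only
        h = f[t] * h + (1.0 - f[t]) * z[t]
        out.append(h)
    return np.stack(out)

rng = np.random.default_rng(0)
T, d_in, d_out, k = 5, 8, 16, 2
y = qrnn_layer(rng.normal(size=(T, d_in)),
               rng.normal(size=(k * d_in, d_out)) * 0.1,
               rng.normal(size=(k * d_in, d_out)) * 0.1, k)
print(y.shape)  # (5, 16)
```

Because no matrix multiplication sits inside the time loop, the sequential cost per step is O(d_out) instead of O(d_out²), which is the source of the reported inference speedups.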


Entropy ◽  
2019 ◽  
Vol 21 (5) ◽  
pp. 464 ◽  
Author(s):  
Alexander Koplenig ◽  
Sascha Wolfer ◽  
Carolin Müller-Spitzer

Recently, it was demonstrated that generalized entropies of order α offer novel and important opportunities to quantify the similarity of symbol sequences where α is a free parameter. Varying this parameter makes it possible to magnify differences between different texts at specific scales of the corresponding word frequency spectrum. For the analysis of the statistical properties of natural languages, this is especially interesting, because textual data are characterized by Zipf’s law, i.e., there are very few word types that occur very often (e.g., function words expressing grammatical relationships) and many word types with a very low frequency (e.g., content words carrying most of the meaning of a sentence). Here, this approach is systematically and empirically studied by analyzing the lexical dynamics of the German weekly news magazine Der Spiegel (consisting of approximately 365,000 articles and 237,000,000 words that were published between 1947 and 2017). We show that, analogous to most other measures in quantitative linguistics, similarity measures based on generalized entropies depend heavily on the sample size (i.e., text length). We argue that this makes it difficult to quantify lexical dynamics and language change and show that standard sampling approaches do not solve this problem. We discuss the consequences of the results for the statistical analysis of languages.
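As a concrete illustration of an order-α entropy over a word-frequency distribution, the sketch below uses the Rényi form, one standard generalized entropy; whether this matches the paper's exact measure is an assumption made for illustration. Small α gives more weight to the many rare types, large α to the few frequent ones, which is how varying α magnifies different scales of the frequency spectrum.

```python
import numpy as np
from collections import Counter

def renyi_entropy(counts, alpha):
    """Rényi entropy of order alpha for a frequency table (Counter)."""
    p = np.asarray(list(counts.values()), dtype=float)
    p /= p.sum()
    if np.isclose(alpha, 1.0):                  # limit case: Shannon entropy
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

freq = Counter("the cat sat on the mat and the dog sat too".split())
for a in (0.5, 1.0, 2.0):
    print(a, round(renyi_entropy(freq, a), 3))
```

Note that estimates of such quantities from a finite text are biased, which is exactly the sample-size dependence the study documents.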


Author(s):  
Shinya Nawata ◽  
Atsuto Maki ◽  
Takashi Hikihara

A power packet is a unit of electric power composed of a power pulse and an information tag. In Shannon’s information theory, messages are represented by symbol sequences in a digitized manner. Referring to this formulation, we define symbols in power packetization as a minimum unit of power transferred by a tagged pulse. Here, power is digitized and quantized. In this paper, we consider packetized power in networks for a finite duration, giving symbols and their energies to the networks. A network structure is defined using a graph whose nodes represent routers, sources and destinations. First, we introduce the concept of a symbol propagation matrix (SPM) in which symbols are transferred at links during unit times. Packetized power is described as a network flow in a spatio-temporal structure. Then, we study the problem of selecting an SPM in terms of transferability, that is, the possibility to represent given energies at sources and destinations during the finite duration. To select an SPM, we consider a network flow problem of packetized power. The problem is formulated as an M-convex submodular flow problem which is a solvable generalization of the minimum cost flow problem. Finally, through examples, we verify that this formulation provides reasonable packetized power.
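The abstract notes that the M-convex submodular flow formulation generalizes the minimum cost flow problem. As a much simpler stand-in for that formulation, the sketch below routes quantized power units (symbols) from sources to destinations through a router as an ordinary min-cost flow with networkx; the topology, demands, and costs are invented for illustration and are not the paper's example.

```python
import networkx as nx

G = nx.DiGraph()
# Negative demand = supply. Two sources feed two destinations via one router.
G.add_node("src1", demand=-3)    # supplies 3 power units
G.add_node("src2", demand=-2)
G.add_node("router", demand=0)
G.add_node("dst1", demand=3)     # requires 3 power units
G.add_node("dst2", demand=2)
for u, v, cap, cost in [("src1", "router", 3, 1), ("src2", "router", 2, 1),
                        ("router", "dst1", 3, 2), ("router", "dst2", 2, 1)]:
    G.add_edge(u, v, capacity=cap, weight=cost)

flow = nx.min_cost_flow(G)       # per-link unit counts, loosely analogous
print(flow["router"])            # to SPM entries: {'dst1': 3, 'dst2': 2}
```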

