sequence processing
Recently Published Documents

TOTAL DOCUMENTS: 232 (five years: 32)
H-INDEX: 23 (five years: 3)

2021 ◽  
Author(s):  
Sweta Kumari ◽  
C Vigneswaran ◽  
V. Srinivasa Chakravarthy

Abstract Sequential decision-making tasks that require integrating information over extended durations are challenging for several reasons, including vanishing gradients, long training times, and significant memory requirements. To this end, we propose a neuron model fashioned after the JK flip-flop of digital systems. A flip-flop is a sequential device that can store state information about its previous history. We incorporate the JK flip-flop neuron into several deep network architectures and apply the networks to difficult sequence processing problems. The proposed architectures include flip-flop neural networks (FFNNs), bidirectional flip-flop neural networks (BiFFNNs), convolutional flip-flop neural networks (ConvFFNNs), and bidirectional convolutional flip-flop neural networks (BiConvFFNNs). Learning rules for the proposed architectures have also been derived. To evaluate the proposed networks, we consider popular benchmark sequential tasks such as signal generation, sentiment analysis, handwriting generation, text generation, video frame prediction, lung volume prediction, and action recognition. Finally, we compare our networks with analogous networks built from Long Short-Term Memory (LSTM) neurons on the same tasks. Our results show that the JK flip-flop networks outperform the LSTM networks, significantly or marginally, on all the tasks, with only half the trainable parameters.
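The abstract does not give the authors' equations, but the classic JK flip-flop update it refers to, Q(t+1) = J·(1−Q) + (1−K)·Q, can be read as a differentiable recurrent cell. The sketch below illustrates that reading; the class and parameter names are hypothetical, not the paper's code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class JKFlipFlopCell:
    """Illustrative recurrent cell with a JK flip-flop state update.

    The flip-flop rule Q(t+1) = J*(1-Q) + (1-K)*Q is made
    differentiable by computing the J and K gates with sigmoids over
    the input and previous state. Sketch only; the paper's exact
    parameterization may differ.
    """

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(input_size + hidden_size)
        self.Wj = rng.normal(0.0, scale, (hidden_size, input_size + hidden_size))
        self.Wk = rng.normal(0.0, scale, (hidden_size, input_size + hidden_size))
        self.bj = np.zeros(hidden_size)
        self.bk = np.zeros(hidden_size)

    def step(self, x, q):
        z = np.concatenate([x, q])
        j = sigmoid(self.Wj @ z + self.bj)    # J gate: drives state toward 1
        k = sigmoid(self.Wk @ z + self.bk)    # K gate: drives state toward 0
        return j * (1.0 - q) + (1.0 - k) * q  # JK update keeps q in [0, 1]

    def run(self, xs):
        q = np.zeros(self.Wj.shape[0])
        for x in xs:
            q = self.step(x, q)
        return q
```

Note the parameter economy the abstract mentions: two gates (J, K) instead of an LSTM's four, which is roughly where the "half the trainable parameters" figure comes from.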


2021 ◽  
pp. 1-11
Author(s):  
Zheng Ye ◽  
Henrike Hanssen ◽  
Julia Steinhardt ◽  
Volker Tronnier ◽  
Dirk Rasche ◽  
...  

Background: Maintaining and manipulating sequences online is essential for language and memory. In Parkinson’s disease (PD), poor performance in sequencing tasks has been associated with basal ganglia dysfunction, especially subthalamic hyperactivity. Objective: This study aimed to investigate the impact of high-frequency subthalamic nucleus (STN) deep brain stimulation (DBS) on sequence processing in PD. Methods: Twenty-nine patients with PD (17 women) completed a ‘before/after’ sentence task and a digit ordering task with STN DBS ON and OFF. In the sentence task, patients read a sequence of events expressed in the actual order of occurrence (‘after’ sentences) or in reversed order (‘before’ sentences) for comprehension. In the digit task, patients recalled a sequence of ordered digits (ordered trials) or reordered and recalled random digits in ascending order (random trials). Volumes of tissue activated (VTAs) were estimated for the motor and associative STN. Results: Patients were slower with STN DBS ON versus OFF in both tasks, although their motor symptoms were significantly improved under DBS. In the sentence task, patients showed higher ordering-related reaction time costs (‘before’ > ‘after’) with DBS ON versus OFF. Moreover, patients with larger left associative VTAs, smaller total motor VTAs, and more daily exposure to dopaminergic drugs tended to show larger reaction time cost increases under DBS. In the digit ordering task, patients with overly large or overly small right associative VTAs tended to show larger reaction time cost increases under DBS. Conclusion: Stimulating the STN, especially its associative part, might impair sequence processing in language and memory.


2021 ◽  
pp. 13-21
Author(s):  
David GINAT

A figure may convey an idea, an argument, and even a proof, sometimes better than words. It may also elicit an idea, an argument, or a proof. In problem solving, a figure may give a “feel” for a problem. A self-generated figure may help in gaining insight, or serve as a means of representing one’s inner associations, or mental model, of the problem. This paper presents self-generated figures in algorithmic problem solving. Students of our IOI advanced stage demonstrated constructive utilization of self-generated figures in solving challenging sequence processing tasks. The figures elicited associations of hidden patterns, whose recognition yielded elegant and efficient algorithmic solutions. We advocate the application and examination of self-generated figures in algorithmic problem solving.


Author(s):  
You Zou ◽  
Yuejie Zhu ◽  
Yaohang Li ◽  
Fang-Xiang Wu ◽  
Jianxin Wang

Abstract The rapid increase of genome data brought by gene sequencing technologies poses a massive challenge to data processing. To solve the problems caused by enormous data volumes and complex computing requirements, researchers have proposed many methods and tools, which can be divided into three types: big data storage, efficient algorithm design, and parallel computing. The purpose of this review is to investigate popular parallel programming technologies for genome sequence processing. Three common parallel computing models are introduced according to their hardware architectures; each is classified into two or three types and further analyzed in terms of its features. Then, parallel computing for genome sequence processing is discussed for four common applications: genome sequence alignment, single nucleotide polymorphism calling, genome sequence preprocessing, and pattern detection and searching. For each application, the background is first introduced, and then a list of tools or algorithms is summarized in terms of principle, hardware platform, and computing efficiency. The programming model of each hardware platform and application provides a reference for researchers choosing high-performance computing tools. Finally, we discuss the limitations and future trends of parallel computing technologies.
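The data-parallel pattern underlying many of the surveyed tools is scatter/compute/gather: split the sequence into chunks, process each chunk independently, then reduce the partial results. A toy sketch (GC-content, not a tool from the review; a thread pool stands in for the processes, clusters, or GPUs real tools use):

```python
from multiprocessing.dummy import Pool  # thread pool, for a portable demo

def gc_count(chunk):
    """Count G and C bases in one chunk of sequence."""
    return chunk.count("G") + chunk.count("C")

def parallel_gc_fraction(seq, workers=4):
    """Data-parallel GC content: split the sequence into roughly equal
    chunks, map gc_count over them in parallel, and reduce the
    per-chunk counts into one fraction."""
    n = max(1, len(seq) // workers)
    chunks = [seq[i:i + n] for i in range(0, len(seq), n)]
    with Pool(workers) as pool:
        counts = pool.map(gc_count, chunks)
    return sum(counts) / len(seq)
```

The same decomposition carries over to the applications the review covers, e.g. aligning independent reads or calling variants per genomic region in parallel.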


2020 ◽  
Vol 6 ◽  
pp. 60-73
Author(s):  
László Drienkó

The present study reports results from a series of computer experiments seeking to combine word-based Largest Chunk (LCh) segmentation and Agreement Groups (AG) sequence processing. The AG model is based on groups of similar utterances that enable combinatorial mapping of novel utterances. LCh segmentation is concerned with cognitive text segmentation, i.e. with detecting word boundaries in a sequence of linguistic symbols. Our observations are based on the text of Le petit prince (The Little Prince) by Antoine de Saint-Exupéry in three languages: French, English, and Hungarian. The data suggest that word-based LCh segmentation is not very efficient with respect to utterance boundaries; however, it can provide useful word combinations for AG processing. Typological differences between the languages are also reflected in the results.
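The abstract does not spell out the LCh procedure, but the core “largest chunk first” idea of word-based segmentation can be sketched as greedy maximal-munch over a known word inventory; the function below is an illustration under that assumption, not the published model.

```python
def largest_chunk_segment(symbols, lexicon):
    """Greedy largest-chunk segmentation (illustrative sketch).

    Scans an unsegmented symbol stream and, at each position, splits
    off the longest prefix found in `lexicon`; symbols that match no
    lexicon entry pass through as single-symbol chunks. The published
    LCh model is more elaborate; this shows only the core idea.
    """
    chunks, i = [], 0
    while i < len(symbols):
        # Try the longest candidate chunk first, shrinking toward one symbol.
        for j in range(len(symbols), i, -1):
            if symbols[i:j] in lexicon:
                chunks.append(symbols[i:j])
                i = j
                break
        else:
            chunks.append(symbols[i])  # no match: emit a single symbol
            i += 1
    return chunks
```

On an unspaced stream such as `"thelittleprince"` with `{"the", "little", "prince"}` as the lexicon, this recovers the word boundaries, which is the kind of output the AG stage would then consume.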


Author(s):  
Dong Xu ◽  
Zhuchou Lu ◽  
Kangming Jin ◽  
Wenmin Qiu ◽  
Guirong Qiao ◽  
...  

Abstract Efficiently extracting information from biological big data can be a huge challenge, especially for people who lack programming skills. We developed Sequence Processing and Data Extraction (SPDE) as an integrated tool for sequence processing and data extraction in gene family and omics analyses. Currently, SPDE has seven modules comprising 100 basic functions that range from single-gene processing (e.g., translation, reverse complement, and primer design) to genome information extraction. All SPDE functions can be used without programming or command lines, and the SPDE interface has enough prompt information to help users run SPDE without barriers. In addition to its own functions, SPDE incorporates publicly available analysis tools (such as NCBI-blast, HMMER, Primer3, and SAMtools), making SPDE a comprehensive bioinformatics platform for big biological data analysis.
Availability: SPDE was built using Python and can be run on 32-bit and 64-bit Windows and on macOS. It is open-source software that can be downloaded from https://github.com/simon19891216/
[email protected]
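For a sense of scale, one of the "basic functions" the abstract names, reverse complement, fits in a few lines of plain Python; this is an independent sketch of the operation, not SPDE's own code.

```python
# DNA reverse complement: complement each base, then reverse the strand.
# Handles upper- and lowercase A/C/G/T; other characters pass through.
_COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq):
    """Return the reverse complement of a DNA sequence string."""
    return seq.translate(_COMPLEMENT)[::-1]
```

SPDE's value, per the abstract, is bundling a hundred such operations behind one interface so that no programming is needed at all.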

