A Database System for High-Throughput Transposon Display Analyses of Rice

2006 ◽  
Vol 16 (1) ◽  
pp. 28-38
Author(s):  
Etsuko INOUE ◽  
Takuya YOSHIHIRO ◽  
Hideya KAWAJI ◽  
Akira HORIBATA ◽  
Masaru NAKAGAWA

2016 ◽  
Vol 4 (3) ◽  
pp. 285-296 ◽  
Author(s):  
Sampath Perumal ◽  
Nomar Espinosa Waminal ◽  
Jonghoon Lee ◽  
Nur Kholilatul Izzah ◽  
Mina Jin ◽  
...  

2001 ◽  
Vol 40 (2) ◽  
pp. 464-486 ◽  
Author(s):  
J. T. Inman ◽  
H. R. Flores ◽  
G. D. May ◽  
J. W. Weller ◽  
C. J. Bell

2012 ◽  
Vol 29 (2) ◽  
pp. 290-291 ◽  
Author(s):  
Nozomu Sakurai ◽  
Takeshi Ara ◽  
Shigehiko Kanaya ◽  
Yukiko Nakamura ◽  
Yoko Iijima ◽  
...  

2020 ◽  
Vol 13 (10) ◽  
pp. 1696-1708
Author(s):  
Robin Rehrmann ◽  
Carsten Binnig ◽  
Alexander Böhm ◽  
Kihong Kim ◽  
Wolfgang Lehner

OLTP applications are usually executed by a large number of clients in parallel and typically face high throughput demands as well as strict latency requirements for individual statements. Interestingly, OLTP workloads are often read-heavy and comprise similar query patterns, which offers the potential to share work across statements belonging to different transactions. Consequently, work-sharing techniques from OLAP have lately begun to be applied to OLTP workloads as well. In this paper, we present an approach for merging read statements within interactively submitted multi-statement transactions consisting of reads and writes. We first define a formal framework for merging transactions running under a given isolation level and provide insights into a prototypical implementation of merging within a commercial database system. In our experimental evaluation, we show that, depending on the isolation level, the system load, and the read share of the workload, transaction throughput can be improved by up to a factor of 2.5 without compromising transactional semantics.
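
As a rough illustration of the statement-merging idea (not the paper's in-engine implementation), the Python sketch below batches identical read statements submitted by different transactions and executes each distinct statement only once. The ReadMerger class, the accounts table, and the transaction names are hypothetical, and SQLite merely stands in for the commercial database system referenced in the abstract.

    import sqlite3
    from collections import defaultdict

    class ReadMerger:
        """Hypothetical sketch: merge identical reads from concurrent transactions."""

        def __init__(self, conn):
            self.conn = conn
            self.pending = defaultdict(list)  # SQL text -> waiting transaction ids

        def submit(self, txn_id, sql):
            # Collect read statements arriving within a short batching window.
            self.pending[sql].append(txn_id)

        def flush(self):
            # Execute each distinct read once and fan the shared rows out
            # to every transaction that submitted an identical statement.
            results = {}
            for sql, txns in self.pending.items():
                rows = self.conn.execute(sql).fetchall()
                for txn in txns:
                    results[txn] = rows
            self.pending.clear()
            return results

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 250)])

    merger = ReadMerger(conn)
    merger.submit("txn_a", "SELECT balance FROM accounts WHERE id = 1")
    merger.submit("txn_b", "SELECT balance FROM accounts WHERE id = 1")  # identical read: merged
    print(merger.flush())  # both transactions receive [(100,)] from a single execution

In the paper's setting, merging happens inside the database engine and must respect the transactions' isolation level; the sketch ignores isolation entirely and only shows the fan-out of a shared result to multiple waiting transactions.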


2015 ◽  
Vol 2015 ◽  
pp. 1-7 ◽  
Author(s):  
Rodrigo Aniceto ◽  
Rene Xavier ◽  
Valeria Guimarães ◽  
Fernanda Hondo ◽  
Maristela Holanda ◽  
...  

Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them is the management of the massive amounts of data generated by automated sequencers. We need to deal with the persistence of genomic data, particularly the storage and analysis of these large-scale processed data. Finding an alternative to the commonly used relational database model therefore becomes a compelling task. Other data models may be more effective when dealing with very large amounts of nonconventional data, especially for write and retrieval operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We analyze persistence and I/O operations on real data using the Cassandra database system and compare the results with a classical relational database system and another NoSQL database, MongoDB.
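
As a hedged sketch of how sequencing reads might be persisted in Cassandra with the Python cassandra-driver, the example below creates a keyspace and a table and performs write and retrieval operations. The keyspace, table, and column names are assumptions made for illustration, not the schema used in the paper.

    from cassandra.cluster import Cluster  # pip install cassandra-driver

    # Hypothetical schema: one row per sequencing read, partitioned by sample.
    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect()

    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS genomics
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
    """)
    session.set_keyspace("genomics")

    session.execute("""
        CREATE TABLE IF NOT EXISTS reads (
            sample_id text,
            read_id   text,
            sequence  text,
            quality   text,
            PRIMARY KEY (sample_id, read_id)
        )
    """)

    # Prepared statement for write-heavy, FASTQ-style bulk loading.
    insert = session.prepare(
        "INSERT INTO reads (sample_id, read_id, sequence, quality) VALUES (?, ?, ?, ?)"
    )
    session.execute(insert, ("sample01", "read0001", "ACGTACGT", "IIIIIIII"))

    # Retrieval: fetch all reads of one sample (a single-partition query).
    rows = session.execute(
        "SELECT read_id, sequence FROM reads WHERE sample_id = %s", ("sample01",)
    )
    for row in rows:
        print(row.read_id, row.sequence)

    cluster.shutdown()

Partitioning by sample_id keeps all reads of a sample in one partition, which favors the write-heavy loading and per-sample retrieval patterns the abstract emphasizes; other partitioning choices are of course possible.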


2007 ◽  
Vol 177 (4S) ◽  
pp. 52-53
Author(s):  
Stefano Ongarello ◽  
Eberhard Steiner ◽  
Regina Achleitner ◽  
Isabel Feuerstein ◽  
Birgit Stenzel ◽  
...  
