Scalable Architectures
Recently Published Documents


TOTAL DOCUMENTS: 50 (6 in the last five years)
H-INDEX: 7 (1 in the last five years)

Author(s): Javier Garcia-Barcos, Ruben Martinez-Cantin

Bayesian optimization has become a popular method for applications where sample efficiency is mandatory, such as the design of computer experiments or hyperparameter tuning of expensive models. These situations also arise in high-throughput computing, where distributed and scalable architectures are a necessity. However, Bayesian optimization is mostly sequential; even parallel variants require certain computations between samples, limiting the parallelization bandwidth. Thompson sampling has previously been applied to distributed Bayesian optimization, but compared with other acquisition functions in the sequential setting, it is known to perform suboptimally. In this paper, we present a new method for fully distributed Bayesian optimization that can be combined with any acquisition function. Our approach treats Bayesian optimization as a partially observable Markov decision process. In this context, stochastic policies, such as the Boltzmann policy, have interesting properties that can also be studied for Bayesian optimization. Furthermore, the Boltzmann policy trivially allows a distributed Bayesian optimization implementation with a high level of parallelism and scalability. We present results on several benchmarks and applications that show the performance of our method.
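As a concrete illustration of why the Boltzmann policy parallelizes so easily, the following Python sketch (our illustration, not the authors' code; the acquisition values here are a synthetic stand-in for, e.g., expected improvement) lets each worker draw its next query point independently from a softmax distribution over the acquisition function, with no coordination between samples:

```python
# Minimal sketch of a Boltzmann (softmax) policy over an acquisition
# function. Each distributed worker can draw its own candidate
# independently, so no synchronization is needed between samples.
import numpy as np

def boltzmann_sample(acq_values, temperature=1.0, size=1, rng=None):
    """Sample candidate indices with probability proportional to
    exp(acquisition / temperature)."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(acq_values) / temperature
    logits -= logits.max()          # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(probs), size=size, p=probs)

# Toy usage: synthetic acquisition values over a 1-D candidate grid.
candidates = np.linspace(0.0, 1.0, 200)
acq = np.exp(-(candidates - 0.3) ** 2 / 0.01)   # stand-in for, e.g., EI
# Each of 8 workers independently picks a point to evaluate:
picks = boltzmann_sample(acq, temperature=0.1, size=8)
print(candidates[picks])
```

Lower temperatures concentrate the policy on the acquisition maximum (approaching the usual sequential greedy choice), while higher temperatures spread the workers' queries for exploration.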


2019, Vol 2019, pp. 1-11
Author(s): Awos Kanan, Fayez Gebali, Atef Ibrahim, Kin Fun Li

Processor array architectures have been employed as accelerators to compute the similarity distances found in a variety of data mining algorithms. However, most of the architectures proposed in the existing literature are designed in an ad hoc manner, without taking into consideration the size and dimensionality of the datasets. Furthermore, data dependencies have not been analyzed, and often only one design choice is considered for the scheduling and mapping of computational tasks. In this work, we present a systematic methodology to design scalable and area-efficient linear (1-D) processor arrays for the computation of similarity distance matrices. Six possible design options are obtained and analyzed in terms of area and time complexities. The obtained architectures provide the flexibility to choose the one that meets the hardware constraints of a specific problem size. Comparisons with previously reported architectures demonstrate that one of the proposed architectures achieves a smaller area and a lower area-delay product, besides its scalability to high-dimensional data.
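For reference, the computation these arrays accelerate can be written as a plain software sketch (Euclidean distance is assumed here for concreteness; the paper addresses similarity distances in general). The three nested loops are the computational tasks whose scheduling and mapping onto a 1-D processor array the methodology analyzes:

```python
# Software reference (not the hardware design) for the similarity
# distance matrix: all pairwise distances between N points of dimension d.
import numpy as np

def distance_matrix(X):
    """Pairwise Euclidean distances; X has shape (N, d). Each (i, j, k)
    iteration is one multiply-accumulate task to be scheduled onto
    the processor array."""
    N, d = X.shape
    D = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            s = 0.0
            for k in range(d):
                s += (X[i, k] - X[j, k]) ** 2
            D[i, j] = np.sqrt(s)
    return D

X = np.random.default_rng(0).random((5, 3))   # 5 points in 3-D
print(distance_matrix(X))
```

Different assignments of the i, j, and k loop indices to time steps and processing elements yield the distinct scheduling/mapping options the paper compares for area and time complexity.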


DOI: 10.29007/j5cs, 2019
Author(s): Evaldo Costa, Gabriel Silva, Marcello Teixeira

In bioinformatics, DNA sequence assembly refers to the reconstruction of an original DNA sequence by aligning and merging fragments obtained from various sequencing methods. The main sequencing methods process thousands or even millions of these fragments, which can be short (hundreds of base pairs) or long (thousands of base pairs) read sequences. This is a highly computational task that usually requires parallel programs and algorithms so that it can be performed with the desired accuracy and within suitable time limits. In this paper, we evaluate the performance of the DALIGNER long-read sequence aligner on a system using the Intel Xeon Phi 7210 processor. We are looking for scalable architectures that can provide higher throughput and be applied to future sequencing technologies.
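A scaling measurement of this kind can be sketched as below (a hypothetical harness, not the paper's methodology; the daligner invocation and input names are placeholders, and real runs require prepared databases and the tool's actual options):

```python
# Sketch of a strong-scaling measurement: time a command at several
# thread counts and report speedup over the single-thread baseline.
import subprocess
import time

def time_command(cmd):
    """Run a shell command and return its wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, shell=True, check=True)
    return time.perf_counter() - start

baseline = None
for threads in (1, 2, 4, 8, 16, 32, 64):     # Xeon Phi 7210 has 64 cores
    # Hypothetical invocation; READS.1 is a placeholder input name.
    cmd = f"daligner -T{threads} READS.1 READS.1"
    elapsed = time_command(cmd)
    baseline = baseline or elapsed
    print(f"{threads:3d} threads: {elapsed:8.1f}s  "
          f"speedup {baseline / elapsed:5.2f}x")
```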


Author(s): James McGettrick, Trystan Watson, Katherine Hooper, Adam Pockett, Matthew Carnie, ...

2017, Vol 26 (5), pp. 2230-2245
Author(s): Cesar Carranza, Daniel Llamocca, Marios Pattichis
