A randomized approach to speed up the analysis of large-scale read-count data in the application of CNV detection

2018 ◽  
Vol 19 (1) ◽  
Author(s):  
WeiBo Wang ◽  
Wei Sun ◽  
Wei Wang ◽  
Jin Szatkiewicz


Author(s):  
Ruiyang Song ◽  
Kuang Xu

We propose and analyze a temporal concatenation heuristic for solving large-scale finite-horizon Markov decision processes (MDPs), which divides the MDP into smaller sub-problems along the time horizon and generates an overall solution by simply concatenating the optimal solutions from these sub-problems. As a “black box” architecture, temporal concatenation works with a wide range of existing MDP algorithms. Our main results characterize the regret of temporal concatenation compared to the optimal solution. We provide upper bounds for general MDP instances, as well as a family of MDP instances for which the upper bounds are shown to be tight. Together, our results demonstrate temporal concatenation's potential for substantial speed-up at the expense of some performance degradation.
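The heuristic is easy to sketch. Below is a minimal illustration on a toy tabular MDP (not the paper's setting): the horizon is split into k segments, each sub-MDP is solved exactly by backward induction with a zero terminal value, and the segment policies are concatenated.

```python
import numpy as np

def backward_induction(P, R, T, v_term=None):
    # Exact finite-horizon solver: P has shape (A, S, S), R has shape (S, A).
    A, S, _ = P.shape
    v = np.zeros(S) if v_term is None else v_term
    policy = np.empty((T, S), dtype=int)
    for t in range(T - 1, -1, -1):
        q = R + np.einsum('asn,n->sa', P, v)  # Q_t(s, a) with value-to-go v
        policy[t] = q.argmax(axis=1)
        v = q.max(axis=1)
    return policy, v  # policy[t, s] = optimal action; v = value at t = 0

def temporal_concatenation(P, R, T, k):
    # Split the horizon into k segments, solve each sub-MDP independently
    # (with a zero terminal value), and concatenate the segment policies.
    bounds = np.linspace(0, T, k + 1).astype(int)
    pieces = [backward_induction(P, R, b - a)[0]
              for a, b in zip(bounds[:-1], bounds[1:])]
    return np.vstack(pieces)  # shape (T, S): the concatenated policy
```

With k = 1 the heuristic reduces to the exact solution, and the last segment always agrees with the exact policy because both see a zero terminal value there; the regret comes from the earlier segments ignoring the value-to-go across segment boundaries.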


Land ◽  
2021 ◽  
Vol 10 (8) ◽  
pp. 792
Author(s):  
Shukun Wang ◽  
Dengwang Li ◽  
Tingting Li ◽  
Changquan Liu

Land fragmentation (LF) is widespread worldwide and affects farmers’ decision-making and, thus, farm performance. We used detailed crop-level household survey data from ten provinces in China to construct four LF indicators and six farm performance indicators, and estimated a set of regression models using ordinary least squares (OLS) to analyse the relationship between LF and farm performance. The results showed that (1) LF increased production material inputs and labour costs; (2) LF reduced farmers’ purchasing of mechanical services and the efficiency of ploughing; and (3) LF may increase technical efficiency (this result, however, was not sufficiently robust and had no effect on yield). Overall, LF was negatively related to farm performance. To improve farm performance, it is recommended that decision-makers speed up land transfer and land consolidation, stabilise land property rights, establish land-transfer intermediary organisations and promote large-scale production.
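As an illustration of the kind of OLS regression the study estimates, here is a minimal sketch on synthetic data; the variable names, controls, and coefficients are hypothetical stand-ins, not the paper's survey data or results.

```python
import numpy as np

# Synthetic stand-in for the crop-level survey data (illustration only).
rng = np.random.default_rng(1)
n = 500
lf = rng.normal(size=n)                      # an LF indicator, e.g. plot count
controls = rng.normal(size=(n, 2))           # household/plot control variables
perf = (2.0 + 0.8 * lf + controls @ np.array([0.3, -0.5])
        + rng.normal(scale=0.5, size=n))     # a cost-type performance indicator

# OLS: regress the performance indicator on LF plus controls and an intercept
X = np.column_stack([np.ones(n), lf, controls])
beta, *_ = np.linalg.lstsq(X, perf, rcond=None)
lf_coef = beta[1]   # positive here: more fragmentation, higher costs
```

A positive coefficient on the LF indicator for a cost outcome mirrors the paper's finding (1) that fragmentation raises input and labour costs.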


2018 ◽  
Vol 8 (1) ◽  
pp. 18
Author(s):  
Kees Bourgonje ◽  
Hubert J. Veringa ◽  
David M.J. Smeulders ◽  
Jeroen A. van Oijen

To speed up the torrefaction process in traditional torrefaction reactors, in particular auger reactors, the reactor temperature is kept substantially higher than the required torrefaction process temperature, owing to the low heat conductivity of biomass. Unfortunately, the off-gas characteristics of biomass are very sensitive in the temperature window of 180-300°C, which can cause a thermal runaway situation in which the process temperature exceeds the intended level. Because of this very sensitive temperature dependence of biomass pyrolysis and its accompanying gas production, a potential solution is to inject small amounts of air directly into the torrefaction reactor. It is found experimentally that this air injection can regulate the temperature of the biomass very rapidly compared to traditional temperature regulation by changing the reactor wall temperature. With this new torrefaction temperature control method, thermal runaway situations can be avoided and the temperature of the biomass in the reactor can be regulated more tightly. Experiments with large beech wood samples show that the torrefaction reaction rate and the temperature in the core of the sample depend on the amount of injected air. Since the flow of combustible gases (torr-gas) originating from the torrefaction process is very sensitive to temperature, the heat production from combusting the torr-gas can be controlled to some extent. This results in both a more homogeneous torrefied product and more stable processing of varying biomass types in large-scale torrefaction systems.
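The control idea can be caricatured with a lumped toy model. All constants below are illustrative assumptions, not measured reactor parameters: the core couples slowly to the wall, air injection adds a fast heat-release term, and a proportional controller injects air only while the core is below the setpoint.

```python
import numpy as np

# Toy lumped energy balance (illustrative assumptions only).
T_set, T_wall = 280.0, 260.0    # degC: target and wall temperatures
k_wall, k_air = 0.005, 2.0      # 1/s wall coupling; degC/s per unit air flow
Kp, q_max = 0.05, 1.0           # proportional gain and maximum air flow

T, dt = 200.0, 1.0              # initial core temperature, time step (s)
for _ in range(3600):           # one simulated hour
    # Inject air only when the core is too cold; clip to the valve limit
    q_air = float(np.clip(Kp * (T_set - T), 0.0, q_max))
    T += (k_wall * (T_wall - T) + k_air * q_air) * dt
```

In this toy model the core settles just below the setpoint and above the wall temperature, illustrating the abstract's point: the combustion heat from injected air, not the slow wall coupling, does the fast regulation.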


2018 ◽  
Vol 16 (06) ◽  
pp. 1850052
Author(s):  
Y. H. Lee ◽  
M. Khalil-Hani ◽  
M. N. Marsono

While the physical realization of practical large-scale quantum computers is still ongoing, theoretical research on quantum computing applications is carried out on classical computing platforms through simulation and emulation. Nevertheless, the exponential growth of resource requirements with the number of qubits is an inherent issue in classical modeling of quantum systems. To alleviate this critical scalability issue in existing FPGA emulation works, a novel FPGA-based quantum circuit emulation framework based on the Heisenberg representation is proposed in this paper. Unlike previous works, which are restricted to emulating quantum circuits of small qubit sizes, the proposed FPGA emulation framework scales up to 120 qubits on an Altera Stratix IV FPGA for the stabilizer circuit case study while providing notable speed-up over the equivalent simulation model.
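The scalability comes from the Heisenberg representation: a stabilizer state on n qubits is tracked by its n Pauli generators (binary X/Z vectors plus a sign bit), so memory grows as O(n²) instead of the 2^n of state-vector simulation. A minimal software sketch of the tableau updates in the Aaronson-Gottesman style (not the paper's FPGA datapath):

```python
import numpy as np

class StabilizerState:
    """Track the n stabilizer generators of an n-qubit state as binary
    X/Z vectors plus a sign bit (Heisenberg picture)."""

    def __init__(self, n):
        self.x = np.zeros((n, n), dtype=np.uint8)  # X part of each generator
        self.z = np.eye(n, dtype=np.uint8)         # |0...0> stabilized by Z_i
        self.r = np.zeros(n, dtype=np.uint8)       # sign bit of each generator

    def h(self, a):
        # Hadamard on qubit a: exchange X and Z, flipping the sign on Y
        self.r ^= self.x[:, a] & self.z[:, a]
        self.x[:, a], self.z[:, a] = self.z[:, a].copy(), self.x[:, a].copy()

    def cnot(self, a, b):
        # CNOT with control a, target b: X propagates a->b, Z propagates b->a
        self.r ^= self.x[:, a] & self.z[:, b] & (self.x[:, b] ^ self.z[:, a] ^ 1)
        self.x[:, b] ^= self.x[:, a]
        self.z[:, a] ^= self.z[:, b]

    def generators(self):
        pauli = {(0, 0): 'I', (1, 0): 'X', (0, 1): 'Z', (1, 1): 'Y'}
        return [('+', '-')[int(s)] + ''.join(pauli[(int(xi), int(zi))]
                                             for xi, zi in zip(xr, zr))
                for xr, zr, s in zip(self.x, self.z, self.r)]

# H then CNOT turns |00> into a Bell pair, stabilized by +XX and +ZZ
bell = StabilizerState(2)
bell.h(0)
bell.cnot(0, 1)
```

Because each generator is just 2n bits plus a sign, a 120-qubit stabilizer state fits in a few kilobytes, which is why this representation maps so well to on-chip FPGA memory.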


2022 ◽  
Vol 15 (2) ◽  
pp. 1-33
Author(s):  
Mikhail Asiatici ◽  
Paolo Ienne

Applications such as large-scale sparse linear algebra and graph analytics are challenging to accelerate on FPGAs due to their short, irregular memory accesses, which result in low cache hit rates. Nonblocking caches reduce the bandwidth required by misses by requesting each cache line only once, even when multiple misses correspond to it. However, this reuse mechanism is traditionally implemented with an associative lookup, which limits the number of misses considered for reuse to a few tens at most. In this article, we present an efficient pipeline that can process and store thousands of outstanding misses in cuckoo hash tables in on-chip SRAM with minimal stalls. This brings the same bandwidth advantage as a larger cache for a fraction of the area budget, because outstanding misses do not need a data array, which can significantly speed up irregular, memory-bound, latency-insensitive applications. In addition, we extend nonblocking caches to generate variable-length bursts to memory, which increases the bandwidth delivered by DRAMs and their controllers. The resulting miss-optimized memory system provides up to 25% speedup with 24× area reduction on 15 large sparse matrix-vector multiplication benchmarks evaluated on an embedded and a datacenter FPGA system.
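The reuse mechanism is simple to sketch in software: each missing cache line triggers exactly one memory request, and later misses to the same line are parked in a hash table until the line returns. Here a Python dict stands in for the paper's on-chip cuckoo hash tables, and the line size is illustrative.

```python
class MissOptimizedMemory:
    """Sketch of outstanding-miss tracking in a nonblocking cache."""

    LINE = 64  # bytes per cache line (illustrative)

    def __init__(self):
        self.outstanding = {}       # line address -> list of waiting requests
        self.requests_issued = 0    # DRAM requests actually sent

    def access(self, req_id, addr):
        line = addr - addr % self.LINE
        if line not in self.outstanding:
            self.outstanding[line] = []
            self.requests_issued += 1      # first miss: issue one DRAM request
        self.outstanding[line].append(req_id)  # every miss waits on the line

    def line_returned(self, line):
        # Data arrived: wake every request that was waiting on this line
        return self.outstanding.pop(line, [])
```

Because each table entry stores only request identifiers, not cache-line data, thousands of outstanding misses cost far less area than an equivalently effective larger cache, which is the trade-off the article exploits.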


MATEMATIKA ◽  
2019 ◽  
Vol 35 (3) ◽  
Author(s):  
Nor Afifah Hanim Zulkefli ◽  
Yeak Su Hoe ◽  
Munira Ismail

Among numerical methods, the boundary element method has been widely used to solve acoustic problems. However, it suffers from drawbacks in computational efficiency, which prevents it from being applied to large-scale problems. This paper proposes a new multiscale technique, coupled with the boundary element method, to speed up numerical calculations. A numerical example is given to illustrate the efficiency of the proposed method. The solution of the proposed method has been validated against the conventional boundary element method, and the proposed method is indeed faster in computation.


2022 ◽  
Vol 16 (4) ◽  
pp. 1-33
Author(s):  
Danlu Liu ◽  
Yu Li ◽  
William Baskett ◽  
Dan Lin ◽  
Chi-Ren Shyu

Risk patterns are crucial in biomedical research and have served as an important factor in precision health and disease prevention. Despite recent developments in parallel and high-performance computing, existing risk pattern mining methods still struggle with problems caused by large-scale datasets, such as redundant candidate generation, inability to discover long significant patterns, and prolonged post-pattern filtering. In this article, we propose a novel dynamic tree structure, the Risk Hierarchical Pattern Tree (RHPTree), and a top-down search method, RHPSearch, which are capable of efficiently analyzing a large volume of data and overcoming the limitations of previous works. The dynamic nature of the RHPTree avoids costly tree reconstruction for the iterative search process and dataset updates. We also introduce two specialized search methods, the extended target search (RHPSearch-TS) and the parallel search approach (RHPSearch-SD), to further speed up the retrieval of certain items of interest. Experiments on both UCI machine learning datasets and sampled datasets of the Simons Foundation Autism Research Initiative (SFARI) Simons Simplex Collection (SSC) demonstrate that our method is not only faster but also more effective in identifying comprehensive long risk patterns than existing works. Moreover, the proposed tree structure is generic and applicable to other pattern mining problems.
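To make the task concrete, here is a generic top-down risk-pattern search with simple support pruning: patterns (feature sets) are enumerated depth-first and kept when their relative risk in case/control records exceeds a threshold. This is a plain illustration of the problem, not the authors' RHPTree or RHPSearch.

```python
def relative_risk(pattern, cases, controls):
    # Relative risk of a feature pattern in case/control records,
    # where each record is a set of features.
    exp_cases = sum(pattern <= r for r in cases)
    exp_ctrls = sum(pattern <= r for r in controls)
    exposed = exp_cases + exp_ctrls
    unexp_cases = len(cases) - exp_cases
    unexposed = len(cases) + len(controls) - exposed
    if exposed == 0 or unexposed == 0:
        return 0.0
    return (exp_cases / exposed) / max(unexp_cases / unexposed, 1e-9)

def top_down_search(features, cases, controls, rr_min=2.0, max_len=3):
    # Depth-first, top-down enumeration with support pruning: a pattern
    # matched by no case record cannot grow into a risk pattern.
    found = []

    def grow(pattern, rest):
        if pattern and relative_risk(pattern, cases, controls) >= rr_min:
            found.append(pattern)
        if len(pattern) >= max_len:
            return
        for i, f in enumerate(rest):
            extended = pattern | {f}
            if any(extended <= r for r in cases):  # prune unsupported branches
                grow(extended, rest[i + 1:])

    grow(frozenset(), sorted(features))
    return found
```

The exponential blow-up of this naive enumeration on long patterns and large feature sets is exactly the redundancy the RHPTree is designed to avoid.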


2007 ◽  
Vol 1 (1) ◽  
pp. 41-76 ◽  
Author(s):  
R. Greve ◽  
S. Otsu

Abstract. The north-east Greenland ice stream (NEGIS) was discovered as a large fast-flow feature of the Greenland ice sheet in synthetic aperture radar (SAR) imagery from the ERS-1 satellite. In this study, the NEGIS is implemented in the dynamic/thermodynamic, large-scale ice-sheet model SICOPOLIS (Simulation Code for POLythermal Ice Sheets). In the first step, we simulate the evolution of the ice sheet on a 10-km grid for the period from 250 ka ago until today, driven by a climatology reconstructed from a combination of present-day observations and GCM results for the past. We assume that the NEGIS area is characterized by enhanced basal sliding compared to the "normal", slowly-flowing areas of the ice sheet, and find that the misfit between simulated and observed ice thicknesses and surface velocities is minimized for a sliding enhancement by a factor of three. In the second step, we investigate the consequences of the NEGIS, and also of surface-meltwater-induced acceleration of basal sliding, for the possible decay of the Greenland ice sheet in future warming climates. It is demonstrated that the ice sheet is generally very susceptible to global warming on time-scales of centuries and that surface-meltwater-induced acceleration of basal sliding can speed up the decay significantly, whereas the NEGIS is not likely to dynamically destabilize the ice sheet as a whole.
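The sliding enhancement can be illustrated with a Weertman-type sliding law, where the NEGIS area simply multiplies the sliding coefficient by an enhancement factor. The constants below are placeholders for illustration, not SICOPOLIS's calibrated parameters.

```python
def basal_sliding(tau_b, enhancement=1.0, C=1e-5, p=3):
    # Weertman-type sliding law: v_b = E * C * tau_b**p
    # (E is the spatial enhancement factor; C, p are placeholder constants)
    return enhancement * C * tau_b ** p

tau = 80.0                                    # kPa, an assumed basal shear stress
v_normal = basal_sliding(tau)                 # "normal", slowly flowing areas
v_negis = basal_sliding(tau, enhancement=3)   # best-fit NEGIS enhancement
```

At equal basal shear stress, the NEGIS grid cells slide three times faster than their surroundings, which is the calibrated misfit-minimizing factor reported in the abstract.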


PLoS ONE ◽  
2019 ◽  
Vol 14 (2) ◽  
pp. e0205474 ◽  
Author(s):  
Stijn Hawinkel ◽  
Frederiek-Maarten Kerckhof ◽  
Luc Bijnens ◽  
Olivier Thas
