memory reduction
Recently Published Documents


TOTAL DOCUMENTS: 121 (FIVE YEARS: 25)

H-INDEX: 12 (FIVE YEARS: 2)

2021 ◽  
Author(s):  
Dan Flomin ◽  
David Pellow ◽  
Ron Shamir

Abstract: The rapid, continuous growth of deep sequencing experiments requires the development and improvement of many bioinformatics applications for the analysis of large sequencing datasets, including k-mer counting and assembly. Several applications reduce RAM usage by binning sequences. Binning is done using minimizer schemes, which rely on a specific order of the minimizers. The choice of order has been shown to have a major impact on application performance. Here we introduce a method for tailoring the order to the dataset. Our method repeatedly samples the dataset and modifies the order so as to flatten the k-mer load distribution across minimizers. We integrated our method into Gerbil, a state-of-the-art memory-efficient k-mer counter, and were able to reduce its memory footprint by 50% or more for large k, with only a minor increase in runtime. Our tests also showed that the orders produced by our method gave superior results when transferred across datasets from the same species, with little or no order change. This enables memory reduction with essentially no increase in runtime.
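
As a rough illustration of the sample-and-flatten idea, the sketch below starts from a default (hash-based) minimizer order and repeatedly demotes overloaded minimizers on a sampled set of reads. The k and m values, the overload threshold, the round count, and the helper names are illustrative assumptions, not the algorithm integrated into Gerbil.

```python
# Toy sketch of load-flattening minimizer ordering (illustrative only;
# K, M, NUM_ROUNDS, the threshold, and all names are assumptions).
from collections import Counter

K, M = 21, 7          # k-mer and minimizer (m-mer) lengths
NUM_ROUNDS = 3        # sample-and-adjust iterations

def minimizer(kmer, rank):
    """Return the m-mer of `kmer` that ranks lowest under the current order.

    Unranked m-mers default to rank 0; hash() breaks ties, giving a
    pseudo-random default order."""
    mmers = [kmer[i:i + M] for i in range(len(kmer) - M + 1)]
    return min(mmers, key=lambda m: (rank.get(m, 0), hash(m)))

def flatten_round(sample_reads, rank):
    """One round: measure the k-mer load per minimizer on a sample,
    then demote minimizers whose load is far above the mean."""
    load = Counter()
    for read in sample_reads:
        for i in range(len(read) - K + 1):
            load[minimizer(read[i:i + K], rank)] += 1
    mean = sum(load.values()) / len(load)
    for mmer, cnt in load.items():
        if cnt > 2 * mean:                      # heuristic overload threshold
            rank[mmer] = rank.get(mmer, 0) + 1  # picked less often from now on
    return rank

rank = {}                                        # empty = default hash order
sample = ["ACGTACGTGACCTGAATTGCCGTAACGTGAAC"]    # stand-in for dataset samples
for _ in range(NUM_ROUNDS):
    rank = flatten_round(sample, rank)
```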


2021 ◽  
Author(s):  
Zahra Atashgahi ◽  
Ghada Sokar ◽  
Tim van der Lee ◽  
Elena Mocanu ◽  
Decebal Constantin Mocanu ◽  
...  

Abstract: Major complications arise from the recent increase in the amount of high-dimensional data, including high computational costs and memory requirements. Feature selection, which identifies the most relevant and informative attributes of a dataset, has been introduced as a solution to this problem. Most existing feature selection methods are computationally inefficient; inefficient algorithms lead to high energy consumption, which is not desirable for devices with limited computational and energy resources. In this paper, a novel and flexible method for unsupervised feature selection is proposed. This method, named QuickSelection (code available at https://github.com/zahraatashgahi/QuickSelection), introduces the strength of a neuron in sparse neural networks as a criterion for measuring feature importance. This criterion, blended with sparsely connected denoising autoencoders trained with the sparse evolutionary training procedure, derives the importance of all input features simultaneously. We implement QuickSelection in a purely sparse manner, as opposed to the typical approach of using a binary mask over connections to simulate sparsity, which results in a considerable speed-up and memory reduction. When tested on several benchmark datasets, including five low-dimensional and three high-dimensional datasets, the proposed method achieves the best trade-off between classification and clustering accuracy, running time, and maximum memory usage among widely used feature selection approaches. Moreover, our method requires the least energy among state-of-the-art autoencoder-based feature selection methods.
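
As a rough sketch of the neuron-strength criterion the abstract describes, the snippet below scores each input feature by the summed absolute weight of its outgoing connections in a sparse layer. The random matrix stands in for an encoder trained with sparse evolutionary training; all shapes, the sparsity level, and the value of k are illustrative assumptions.

```python
# Toy sketch of the neuron-strength criterion (all shapes, the sparsity
# level, and the random weights are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden, sparsity = 100, 50, 0.95

# Random sparse matrix standing in for the input layer of a denoising
# autoencoder trained with sparse evolutionary training.
W = rng.normal(size=(n_features, n_hidden))
W[rng.random(W.shape) < sparsity] = 0.0

# Strength of input neuron i = summed absolute weight of its connections.
strength = np.abs(W).sum(axis=1)

k = 10                                        # number of features to keep
selected = np.argsort(strength)[-k:][::-1]    # k strongest input features
print(selected)
```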


2021 ◽  
Author(s):  
Jae Hong Roh ◽  
Useok Lee ◽  
Yongje Lee ◽  
Myung Hoon Sunwoo

2021 ◽  
Vol 10 (10) ◽  
pp. 643
Author(s):  
Yuhao Huo ◽  
Anran Yang ◽  
Qingren Jia ◽  
Yebin Chen ◽  
Biao He ◽  
...  

Oblique photogrammetry models are indispensable for implementing digital twins of cities. Geographic information system researchers have proposed many methods for loading and visualizing these city-scale scenes. However, when the viewed area changes quickly during real-time rendering, current methods still require excessive GPU computation and memory. In this study, we propose a data organization method in which all quadtrees are merged and the nodes of the merged tree are given a binary encoding, so that the parent–child relationship between tree nodes can be computed with fast binary operations. Building on this, we developed a strategy to cancel the loading of redundant nodes based on the parent–child relationship, which reduces hard-disk loading time and the amount of memory occupied during visualization. Moreover, we introduced a parameter measuring the triangle-mesh area per pixel to achieve unified data scheduling under different production standards. We implemented our method in Unreal Engine (UE) and designed three experiments to illustrate its advantages in index acceleration, frame time, and memory reduction. The results show that our method significantly improves visualization fluency and reduces memory usage.
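
A minimal sketch of binary node encoding with bit-shift parent–child tests, assuming the common scheme in which each quadtree level contributes two bits below a leading marker bit; the paper's exact encoding of the merged tree may differ.

```python
# Toy binary encoding of quadtree node IDs (the leading 1-bit marks the
# depth; this scheme is an assumption, not the paper's exact encoding).
def child(code: int, quadrant: int) -> int:
    """Append one level: `quadrant` is 0..3."""
    return (code << 2) | quadrant

def parent(code: int) -> int:
    """Drop the last level with a single shift."""
    return code >> 2

def is_ancestor(anc: int, desc: int) -> bool:
    """True if `anc` lies on the root-to-`desc` path (inclusive)."""
    diff = desc.bit_length() - anc.bit_length()
    return diff >= 0 and (desc >> diff) == anc

ROOT = 0b1                       # marker bit so depth is recoverable
n = child(child(ROOT, 2), 1)     # root -> quadrant 2 -> quadrant 1
assert parent(n) == child(ROOT, 2)
assert is_ancestor(ROOT, n) and not is_ancestor(n, ROOT)
```

With IDs like these, redundancy checks during loading reduce to shifts and comparisons on integers, with no tree traversal or pointer chasing.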


2021 ◽  
Author(s):  
Weile Wei ◽  
Eduardo D'Azevedo ◽  
Kevin Huck ◽  
Arghya Chatterjee ◽  
Oscar Hernandez ◽  
...  

2021 ◽  
Vol 17 (4) ◽  
pp. 1-11
Author(s):  
Wentao Chen ◽  
Hailong Qiu ◽  
Jian Zhuang ◽  
Chutong Zhang ◽  
Yu Hu ◽  
...  

Deep neural networks have demonstrated great potential in recent years, exceeding the performance of human experts in a wide range of applications. Due to their large size, however, compression techniques such as weight quantization and pruning are usually applied before they can be deployed on the edge. It is generally believed that quantization degrades performance, and many existing works have explored quantization strategies that aim for minimum accuracy loss. In this paper, we argue that quantization, which essentially imposes regularization on the weight representations, can sometimes help to improve accuracy. We conduct comprehensive experiments on three widely used applications: a fully connected network for biomedical image segmentation, a convolutional neural network for image classification on ImageNet, and a recurrent neural network for automatic speech recognition. Experimental results show that quantization improves accuracy by 1%, 1.95%, and 4.23% on the three applications, respectively, with a 3.5x-6.4x memory reduction.
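
For concreteness, the sketch below shows uniform symmetric weight quantization of the general kind the abstract refers to: weights are rounded to a small integer grid (the implicit regularization) and stored as int8 with one float scale (the memory reduction). The bit width, toy tensor, and per-tensor scaling are illustrative assumptions; the paper's exact quantization scheme is not specified here.

```python
# Toy uniform symmetric quantization (bit width, tensor, and per-tensor
# scaling are assumptions, not the paper's scheme).
import numpy as np

def quantize(w: np.ndarray, bits: int = 8):
    """Map float weights to `bits`-bit signed integers plus one scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(scale=0.05, size=(4, 4)).astype(np.float32)
q, s = quantize(w)
print(np.abs(w - dequantize(q, s)).max())  # rounding error is at most ~scale/2

# Storing int8 instead of float32 gives a ~4x memory reduction; the
# rounding itself constrains the weight values, acting like a regularizer.
```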


2021 ◽  
pp. 1060-1068
Author(s):  
Н. М. Залуцкая ◽  
Н. А. Гомзякова ◽  
Д. М. Сарайкин ◽  
Н. И. Ананьева ◽  
Н. Г. Незнанов

Using the Addenbrooke's Cognitive Examination (ACE-III), the Stroop Test (ST), the Wechsler Memory Scale (WMS), and the Frontal Assessment Battery (FAB), we examined 44 practically healthy respondents («age norm») aged 52 to 95 years. The sample was divided by age into two groups: the first included people up to 65 years of age (64 years inclusive), the second consisted of subjects over 65. Statistically significant differences between the two groups on the ACE-III were found for the Memory score and the total score, with the level of cognitive functioning measured by the ACE-III decreasing with age. Comparison of the Stroop Test data indicates a slower pace of work under load, weakened flexibility in the organization of mental activity and concentration of attention, and increased interference in the older age group we examined. Correlation analysis of the Stroop Test data with age showed that, as age increases, cognitive control over information processing decreases, interfering influences grow, the accuracy and pace of activity decline, and the activity itself becomes more rigid. Correlation analysis of the WMS scores with age demonstrated a decline in mental control over activity, a deterioration of memory in the visual modality, and a progressive reduction of working memory with increasing age. With increasing age, a decline in frontal (executive) functions, assessed with the FAB, was found in the healthy subjects examined.


2021 ◽  
Author(s):  
Magdalena E. G. Hofmann ◽  
Zhiwei Lin ◽  
Jan Woźniak ◽  
Keren Drori

Oxygen (¹⁸O/¹⁶O) and deuterium (D/H) isotopes are a widespread tool for tracing physical and chemical processes in hydrology and the biogeosciences. Precision and throughput are key parameters for water isotope analysis. Here, we present two new methodologies for the Picarro L2130-i Cavity Ring-Down Spectroscopy (CRDS) water isotope analyzer that increase throughput without compromising data quality.

The Picarro Express Method now distinguishes between a memory reduction stage and a sample analysis stage and allows up to 50 samples per day to be measured while maintaining the excellent precision of CRDS (i.e., 0.01‰ for δ¹⁸O and 0.05‰ for δD). This corresponds to doubling the throughput compared with the standard Picarro methodology. The Picarro Survey Method makes use of ultrafast injections and sorts samples by their measured isotopic values, enabling a powerful new strategy for reducing memory effects.

We will discuss different measurement strategies for increasing the throughput of routine water isotope analysis. The improved methodologies do not require any hardware changes and are based solely on modifications of the injection procedure. If you are interested in Picarro's off-the-shelf solution for increasing the productivity of your existing and future installations, please visit the Picarro vEGU 2021 booth for a free voucher.


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Moustafa Shokrof ◽  
C. Titus Brown ◽  
Tamer A. Mansour

Abstract
Background: Specialized data structures are required for online algorithms to efficiently handle large sequencing datasets. The counting quotient filter (CQF), a compact hash table, can efficiently store k-mers with a skewed distribution.
Results: Here, we present the mixed-counters quotient filter (MQF) as a new variant of the CQF with novel counting and labeling systems. The new counting system adapts to a wider range of data distributions for increased space efficiency and is faster than the CQF for insertions and queries in most of the tested scenarios. A buffered version of the MQF can offload storage to disk, trading insertion and query speed for a significant memory reduction. The labeling system provides a flexible framework for assigning labels to member items while maintaining good data locality and a concise memory representation. These labels serve as a minimal perfect hash function but are ~tenfold faster than BBhash, with no need to re-analyze the original data for further insertions or deletions.
Conclusions: The MQF is a flexible and efficient data structure that extends our ability to work with high-throughput sequencing data.
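
To illustrate the quotienting idea underlying the CQF and MQF, the toy sketch below splits a hash into a quotient that addresses a slot and a remainder that is stored with a count. Real quotient filters pack remainders into contiguous runs with metadata bits and, in the MQF, use variable-sized counters; none of that is reproduced here, and the bit widths and hash choice are assumptions.

```python
# Toy quotient/remainder counting table (illustrative only; real CQF/MQF
# slot layout, run metadata, and variable-sized counters are omitted).
import hashlib

Q, R = 16, 16                           # quotient and remainder bit widths

def qr(item: bytes):
    """Split a hash of `item` into (quotient = slot index, remainder)."""
    h = int.from_bytes(hashlib.blake2b(item, digest_size=8).digest(), "big")
    h &= (1 << (Q + R)) - 1             # keep only q + r bits of the hash
    return h >> R, h & ((1 << R) - 1)

table = {}                              # slot -> {remainder: count}

def insert(kmer: str) -> None:
    q, r = qr(kmer.encode())
    table.setdefault(q, {})
    table[q][r] = table[q].get(r, 0) + 1

def count(kmer: str) -> int:
    q, r = qr(kmer.encode())
    return table.get(q, {}).get(r, 0)

for km in ["ACGTACG", "ACGTACG", "TTTTACG"]:
    insert(km)
print(count("ACGTACG"), count("TTTTACG"), count("GGGGGGG"))  # expected: 2 1 0
```

The space saving comes from the split: because the quotient is implicit in the slot position, only the remainder (plus counter and metadata bits) needs to be stored per item.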

