Memory Usage
Recently Published Documents


TOTAL DOCUMENTS: 366 (FIVE YEARS: 186)
H-INDEX: 20 (FIVE YEARS: 6)

2022 ◽  
Vol 9 ◽  
Author(s):  
Bangyu Wu ◽  
Wenzhuo Tan ◽  
Wenhao Xu ◽  
Bo Li

The large computational memory requirement is an important issue in 3D large-scale wave modeling, especially for GPU calculation. Based on the observation that wave propagation velocity tends to increase gradually with depth, we propose a 3D trapezoid-grid finite-difference time-domain (FDTD) method that reduces memory usage without significantly increasing computational time or decreasing modeling accuracy. It adopts a size-increasing trapezoid-grid mesh that follows the increasing trend of seismic wave velocity with depth, which significantly reduces oversampling in high-velocity regions. A trapezoid coordinate transformation is used to alleviate the difficulty of processing non-uniform grids. We derive the 3D acoustic equation in the new trapezoid coordinate system and adopt the corresponding trapezoid-grid convolutional perfectly matched layer (CPML) absorbing boundary condition to eliminate artificial boundary reflections. A stability analysis is given to ensure stable modeling results. Numerical tests on a 3D homogeneous model verify the effectiveness of our method and of the trapezoid-grid CPML absorbing boundary condition, while tests on the SEG/EAGE overthrust model indicate that, for comparable computational time and accuracy, our method achieves about a 50% reduction in memory usage compared with the uniform-grid FDTD method.
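As a back-of-the-envelope illustration of why a size-increasing mesh saves memory, the sketch below counts grid points for a uniform grid versus a grid whose horizontal spacing grows with depth. The growth factor and grid sizes are illustrative assumptions, not the paper's parameters:

```python
# Sketch: estimate grid-point counts for a uniform grid vs. a
# size-increasing ("trapezoid") grid whose horizontal spacing grows
# with depth to track increasing velocity. Illustrative only.

def uniform_points(nx, ny, nz):
    return nx * ny * nz

def trapezoid_points(nx0, ny0, nz, growth=1.01):
    """Horizontal point counts shrink with depth as spacing grows."""
    total = 0
    nx, ny = float(nx0), float(ny0)
    for _ in range(nz):
        total += int(nx) * int(ny)
        nx /= growth   # spacing grows by `growth` -> fewer points per layer
        ny /= growth
    return total

nu = uniform_points(400, 400, 200)
nt = trapezoid_points(400, 400, 200, growth=1.01)
print(f"uniform: {nu:,} points, trapezoid: {nt:,} points")
print(f"reduction: {1 - nt / nu:.0%}")
```

Even a 1% per-layer spacing increase compounds quickly with depth, which is why the savings on deep models are substantial.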


Electronics ◽  
2022 ◽  
Vol 11 (1) ◽  
pp. 139
Author(s):  
Juneseo Chang ◽  
Myeongjin Kang ◽  
Daejin Park

Smart homes assist users by providing convenient services such as activity classification with the help of machine learning (ML) technology. However, most conventional high-performance ML algorithms require relatively high power consumption and memory usage due to their complex structure. Moreover, previous studies on lightweight ML/DL models for human activity classification still require relatively high resources for extremely resource-limited embedded systems; thus, they are inapplicable to smart homes' embedded system environments. Therefore, in this study, we propose a low-power, memory-efficient, high-speed ML algorithm for smart home activity data classification suitable for an extremely resource-constrained environment. We propose a method for treating smart home activity data as image data, and hence use the MNIST dataset as a substitute for real-world activity data. The proposed ML algorithm consists of three parts: data preprocessing, training, and classification. In data preprocessing, training data with the same label are grouped into further detailed clusters. The training process generates hyperplanes by accumulating and thresholding each cluster of preprocessed data. Finally, the classification process classifies input data by calculating the similarity between the input data and each hyperplane using a bitwise-operation-based error function. We verified our algorithm on Raspberry Pi 3 and STM32 Discovery board embedded systems by loading trained hyperplanes and performing classification on 1000 training samples. Compared to a linear support vector machine implemented with TensorFlow Lite, the proposed algorithm improved memory usage to 15.41%, power consumption to 41.7%, performance up to 50.4%, and power per accuracy to 39.2%. Moreover, compared to a convolutional neural network model, the proposed model improved memory usage to 15.41%, power consumption to 61.17%, performance to 57.6%, and power per accuracy to 55.4%.
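A minimal sketch of the accumulate-and-threshold training and bitwise classification described above, on toy binary patterns. The cluster handling, threshold value, and data are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

# Sketch of accumulate-and-threshold "hyperplane" training plus a
# bitwise-operation-based error function. Illustrative only.

def train_hyperplane(samples, threshold=0.5):
    """Accumulate binary samples of one cluster, then threshold the
    per-pixel mean to get a single binary template."""
    acc = np.mean(samples, axis=0)
    return (acc >= threshold).astype(np.uint8)

def classify(x, hyperplanes):
    """Pick the label whose template has the fewest bit mismatches
    (XOR + popcount acts as the bitwise error function)."""
    errors = {label: int(np.sum(np.bitwise_xor(x, h)))
              for label, h in hyperplanes.items()}
    return min(errors, key=errors.get)

# Toy data: two 8-bit "patterns" with small variations per cluster.
zeros = np.array([[1, 1, 1, 0, 0, 0, 0, 0],
                  [1, 1, 0, 0, 0, 0, 0, 1]], dtype=np.uint8)
ones  = np.array([[0, 0, 0, 0, 1, 1, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1, 0]], dtype=np.uint8)
planes = {0: train_hyperplane(zeros), 1: train_hyperplane(ones)}
print(classify(np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=np.uint8), planes))  # prints 0
```

On a microcontroller the XOR-and-count step maps directly onto word-wide bitwise instructions, which is where the speed and power advantage over floating-point models comes from.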


Author(s):  
Prerana Shenoy S. P. ◽  
Sai Vishnu Soudri ◽  
Ramakanth Kumar P. ◽  
Sahana Bailuguttu

Observability is the ability to monitor the state of a system, which involves tracking standard metrics such as central processing unit (CPU) utilization, memory usage, and network bandwidth. The better we understand the state of the system, the better we can improve its performance by recognizing unwanted behavior and improving its stability and reliability. To achieve this, it is essential to build an automated monitoring system that is easy to use and efficient. To that end, we have built a Kubernetes operator that automates the deployment and monitoring of applications and reports unwanted behavior in real time. It also enables visualization of the metrics generated by the application and allows these visualization dashboards to be standardized for each type of application. Thus, it improves the system's productivity and vastly saves time and resources in deploying monitored applications, upgrading Kubernetes resources for each deployed application, and migrating applications.
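The report-on-unwanted-behavior idea can be sketched as a simple threshold check over scraped metrics. The metric names and limits below are illustrative assumptions, not the operator's actual configuration:

```python
# Sketch: flag metrics that breach per-metric limits, the core of a
# notify-on-unwanted-behavior loop. Names and thresholds are made up.

def check_metrics(metrics, limits):
    """Return (metric, value, limit) triples for every breach."""
    return [(name, value, limits[name])
            for name, value in metrics.items()
            if name in limits and value > limits[name]]

sample = {"cpu_percent": 91.0, "memory_mb": 412.0, "net_mbps": 3.2}
limits = {"cpu_percent": 80.0, "memory_mb": 512.0}
for name, value, limit in check_metrics(sample, limits):
    print(f"ALERT: {name}={value} exceeds {limit}")
```

In a real operator this check would run in the reconciliation loop against metrics scraped from the cluster, with alerts routed to a notification channel.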


2021 ◽  
Vol 7 (3) ◽  
pp. 450
Author(s):  
Krisna Aditama Ashari ◽  
Is Mardianto ◽  
Dedy Sugiarto

Reliability is an important property of a server in serving its users, and one way to measure it is through performance testing. This study aims to determine the capability of RStudio Server on cloud infrastructure when used by multiple users, with the Elastic Stack as the system handling the collection, storage, and visualization of its metric data. The process begins by collecting system metrics with Metricbeat, which are then processed by Logstash and stored as indices in Elasticsearch; data visualization is provided by Kibana. Server performance testing was carried out by running R scripts of 2-minute and 7-minute durations simultaneously. The test results, consisting of CPU usage, memory usage, and script completion time, were then plotted in R. Analysis of the plotted data shows that at most 2 users can optimally use an RStudio Server with 2 CPUs and 4 GB of RAM for scripts with run times of 2 and 7 minutes; beyond that number of users, script completion times degrade to medium-to-heavy performance levels.


Author(s):  
Norliza Katuk ◽  
Ikenna Rene Chiadighikaobi

Many previous studies have shown that the PRESENT algorithm is an ultra-lightweight encryption scheme and is therefore suitable for use in an IoT environment. However, a key problem with block encryption algorithms like PRESENT is that patterned input can help attackers recover the encryption key. A fingerprint template, in particular, contains a header and many zero blocks, which produce patterns in the ciphertext and make it easier for attackers to obtain the encryption key. Thus, this research proposed a header and zero-block bypass method applied during block pre-processing to overcome this problem. First, the original PRESENT algorithm was enhanced by incorporating the block pre-processing phase. Then, the algorithm's performance was tested using three measures: time, memory usage, and CPU usage for encrypting and decrypting fingerprint templates. This study demonstrated that the proposed method encrypted and decrypted fingerprint templates faster, with the same CPU usage as the original algorithm, but consumed more memory. Thus, it has the potential to be used for security in IoT environments.
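A minimal sketch of the header and zero-block bypass idea, using a placeholder XOR "cipher" as a stand-in for PRESENT. The header length and template layout are illustrative assumptions, not the paper's exact pre-processing:

```python
# Sketch: split a template into 8-byte blocks (PRESENT's block size)
# and pass only non-header, non-zero blocks to the cipher, so repeated
# zero blocks never produce a repeating ciphertext pattern.

BLOCK = 8  # PRESENT operates on 64-bit blocks

def encrypt_block(block, key):
    # Placeholder cipher for illustration only (NOT secure, NOT PRESENT).
    return bytes(b ^ k for b, k in zip(block, key))

def encrypt_template(data, key, header_len=BLOCK):
    out = []
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        if i < header_len or block == b"\x00" * len(block):
            out.append(block)            # bypass: copy through unchanged
        else:
            out.append(encrypt_block(block, key))
    return b"".join(out)

key = bytes(range(8))
template = b"HEADER!!" + b"\x00" * 8 + b"minutiae"
print(encrypt_template(template, key).hex())
```

Decryption would apply the same bypass rule symmetrically, so header and zero blocks round-trip untouched while only the informative blocks go through the cipher.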


2021 ◽  
Author(s):  
Dan Flomin ◽  
David Pellow ◽  
Ron Shamir

The rapid, continuous growth of deep sequencing experiments requires the development and improvement of many bioinformatics applications for the analysis of large sequencing datasets, including k-mer counting and assembly. Several applications reduce RAM usage by binning sequences. Binning is done by employing minimizer schemes, which rely on a specific order of the minimizers, and it has been demonstrated that the choice of order has a major impact on application performance. Here we introduce a method for tailoring the order to the dataset. Our method repeatedly samples the dataset and modifies the order so as to flatten the k-mer load distribution across minimizers. We integrated our method into Gerbil, a state-of-the-art memory-efficient k-mer counter, and were able to reduce its memory footprint by 50% or more for large k, with only a minor increase in runtime. Our tests also showed that the orders produced by our method yielded superior results when transferred across datasets from the same species, with little or no order change. This enables memory reduction with essentially no increase in runtime.
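The sample-and-rebalance idea can be sketched on toy data: count how often each m-mer wins as minimizer under the current order, then demote the most overloaded one. The k-mers and the one-shot demotion rule are illustrative assumptions; the real method's sampling and order updates are more elaborate:

```python
from collections import Counter

# Sketch: frequency-tuned minimizer order on toy data. Illustrative only.

def minimizer(kmer, m, rank):
    """Smallest m-mer of `kmer` under the order given by `rank`."""
    return min((kmer[i:i + m] for i in range(len(kmer) - m + 1)),
               key=rank)

def load(kmers, m, rank):
    """How many k-mers each m-mer serves as minimizer for."""
    return Counter(minimizer(k, m, rank) for k in kmers)

kmers = ["ACGTAC", "AACGTA", "ACGACG", "TACGTT"]
lex = lambda mm: mm                      # plain lexicographic order
counts = load(kmers, 3, lex)
heavy = counts.most_common(1)[0][0]      # most overloaded minimizer
tuned = lambda mm: (1, mm) if mm == heavy else (0, mm)  # demote it
print(counts, load(kmers, 3, tuned))
```

Flattening the per-minimizer load keeps bin sizes balanced, which is what lets the downstream k-mer counter cap its per-bin memory.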


MEDIAKITA ◽  
2021 ◽  
Vol 4 (2) ◽  
Author(s):  
Nurul Dwi Lestari

This study discusses aspects of sentence production, including the phenomena of various types of silences (pauses) and errors in sentence production (speaking) that occur in public speakers, the factors that cause these silences and errors, the relationship between the silence phenomena and the process of memory usage, and the things a public speaker needs to pay attention to in order to avoid silences and mistakes in speaking. This research uses a qualitative approach with a descriptive analysis method. The results showed that in the speech activities carried out by public speakers, there were various forms of silences and slips of the tongue caused by certain factors.

Keywords: sentence production, silence, slip of the tongue, speech


2021 ◽  
Author(s):  
Xingjian Gao ◽  
Jiarui Li ◽  
Xinxuan Liu ◽  
Qianqian Peng ◽  
Han Jing ◽  
...  

Here we describe fastQTLmapping, a C++ package that is computationally efficient not only for mQTL-like analyses but also, as a generic solver, for conducting exhaustive linear regressions involving extraordinarily large numbers of dependent and explanatory variables while allowing for covariates. Compared to the state-of-the-art MatrixEQTL, fastQTLmapping was an order of magnitude faster with much lower peak memory usage. In a large dataset consisting of 3,500 individuals, 8 million SNPs, 0.8 million CpG sites, and 20 covariates, fastQTLmapping completed the mQTL analysis in 7 hours with a peak memory usage of 230 GB.
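The exhaustive-regression core can be sketched with NumPy: project out the covariates, then compute every SNP-by-CpG slope in a single matrix product. Sizes are toy-scale and the tool's actual numerics and significance testing are more involved:

```python
import numpy as np

# Sketch: all pairwise simple-regression slopes after residualizing
# both sides against covariates. Toy sizes; illustrative only.

rng = np.random.default_rng(0)
n, n_snp, n_cpg, n_cov = 100, 5, 4, 2
snps = rng.normal(size=(n, n_snp))
cpgs = rng.normal(size=(n, n_cpg))
cov = np.column_stack([np.ones(n), rng.normal(size=(n, n_cov))])

# Residualize both sides against covariates (including an intercept).
proj = cov @ np.linalg.pinv(cov)         # hat matrix of the covariates
rs = snps - proj @ snps
rc = cpgs - proj @ cpgs

# All n_snp x n_cpg slopes in one matrix product:
# beta[i, j] = <rs_i, rc_j> / <rs_i, rs_i>.
betas = (rs.T @ rc) / np.sum(rs * rs, axis=0)[:, None]
print(betas.shape)   # (5, 4)
```

Batching all pairs into one dense product is what makes the exhaustive scan tractable; the real package additionally streams SNP blocks from disk to keep peak memory bounded.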


2021 ◽  
Author(s):  
Constantine Dymnikov

Object ownership allows us to statically control run-time aliasing in order to provide a strong notion of object encapsulation. Unfortunately, in order to use ownership, code must first be annotated with extra type information. This imposes a heavy burden on the programmer and has contributed to the slow adoption of ownership. Ownership inference is the process of reconstructing ownership type information based on the existing ownership patterns in code. This thesis presents OwnKit, an automatic ownership inference tool for Java. OwnKit conducts inference in a modular way: by considering only a single class at a time. The modularity makes our algorithm highly scalable in both time and memory usage.

