Memory Utilization
Recently Published Documents

Total documents: 110 (last five years: 32)
H-index: 7 (last five years: 2)

Sensors, 2021, Vol. 21 (23), pp. 8017
Author(s): Nurfazrina M. Zamry, Anazida Zainal, Murad A. Rassam, Eman H. Alkhammash, Fuad A. Ghaleb, ...

Wireless Sensor Networks (WSNs) have attracted significant research and development attention due to their applications in collecting data from fields such as smart cities, power grids, transportation systems, medical sectors, the military, and rural areas. Accurate and reliable measurements for insightful data analysis and decision-making are the ultimate goals of sensor networks in critical domains. However, the raw data collected by WSNs are often unreliable and inaccurate due to the imperfect nature of WSNs. Identifying misbehaviours or anomalies in the network is important for reliable and secure network operation. Because of resource constraints, however, a lightweight detection scheme is a major design challenge in sensor networks. This paper aims at designing and developing a lightweight anomaly detection scheme that reduces computational complexity and communication overhead and improves memory utilization while maintaining high accuracy. To achieve this aim, one-class learning and dimension reduction concepts were used in the design. The One-Class Support Vector Machine (OCSVM) with hyper-ellipsoid variance was used for anomaly detection due to its advantage in classifying unlabelled and multivariate data. Several OCSVM formulations were investigated, and the Centred-Ellipsoid formulation was adopted as the most effective kernel among those studied. To decrease the computational complexity and improve memory utilization, the dimensions of the data were reduced using the Candid Covariance-Free Incremental Principal Component Analysis (CCIPCA) algorithm. Extensive experiments were conducted to evaluate the proposed lightweight anomaly detection scheme. Results in terms of detection accuracy, memory utilization, computational complexity, and communication overhead show that the proposed scheme is effective and efficient compared with several existing schemes. The proposed anomaly detection scheme achieved an accuracy higher than 98%, with O(nd) memory utilization and no communication overhead.
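
As a rough illustration of the pipeline this abstract describes, the sketch below pairs incremental PCA with a one-class SVM. scikit-learn's IncrementalPCA stands in for CCIPCA and the default RBF kernel stands in for the Centred-Ellipsoid formulation, so this is a minimal sketch of the approach, not the authors' implementation; the synthetic data and all parameter values are assumptions.

```python
# Hedged sketch: incremental dimension reduction + one-class anomaly detection.
# IncrementalPCA is a stand-in for CCIPCA; RBF stands in for Centred-Ellipsoid.
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 10))    # unlabelled sensor readings
anomalous = rng.normal(5.0, 1.0, size=(20, 10))  # injected misbehaviour

# Reduce d=10 dimensions to k=3, processing the stream in mini-batches
# so memory stays bounded per batch.
ipca = IncrementalPCA(n_components=3, batch_size=100)
for batch in np.array_split(normal, 5):
    ipca.partial_fit(batch)

detector = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale")
detector.fit(ipca.transform(normal))

# +1 = normal, -1 = anomaly.
print(detector.predict(ipca.transform(anomalous)))
```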


Author(s): Gururaj T., Siddesh G. M.

In gene expression analysis, the expression levels of thousands of genes are analyzed across conditions such as separate stages of treatments or diseases. Identifying a particular gene sequence pattern is a challenging task with respect to performance. The proposed solution addresses the performance issues in genomic stream matching, involving both assembly and sequencing. When counting k-mers for a given k value and performing DNA sequencing tasks, researchers need to concentrate on sequence matching. The proposed solution addresses performance metrics such as processing time for k-mer counting, number of operations for similarity matching, memory utilization during similarity search, and processing time for stream matching. It suggests an improved algorithm, Revised Rabin-Karp (RRK), for the basic operation and, for further efficiency, a novel framework based on Hadoop MapReduce blended with Pig and Apache Tez. Measurements of memory utilization and processing time show that the proposed model is efficient compared to existing approaches.
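
The abstract does not detail the Revised Rabin-Karp (RRK) algorithm or its Hadoop/Pig/Tez deployment, so the sketch below shows only the textbook Rabin-Karp rolling hash it builds on, applied to k-mer matching over a DNA stream; the 2-bit base encoding and the Mersenne modulus are illustrative choices.

```python
# Textbook Rabin-Karp rolling hash over a DNA alphabet.
BASE, MOD = 4, (1 << 61) - 1
ENC = {"A": 0, "C": 1, "G": 2, "T": 3}

def rabin_karp(stream: str, pattern: str) -> list[int]:
    """Return every offset at which pattern occurs in stream."""
    k = len(pattern)
    if k == 0 or k > len(stream):
        return []
    def h(s: str) -> int:
        v = 0
        for ch in s:
            v = (v * BASE + ENC[ch]) % MOD
        return v
    target, window = h(pattern), h(stream[:k])
    top = pow(BASE, k - 1, MOD)
    hits = [0] if window == target and stream[:k] == pattern else []
    for i in range(k, len(stream)):
        # Slide the window: drop the leftmost symbol, append the next one.
        window = ((window - ENC[stream[i - k]] * top) * BASE + ENC[stream[i]]) % MOD
        if window == target and stream[i - k + 1 : i + 1] == pattern:
            hits.append(i - k + 1)
    return hits

print(rabin_karp("ACGTACGTGACG", "CGT"))  # -> [1, 5]
```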


Author(s): Adnan Adel Bitar, Dr. V. Sujatha

Encryption is an effective way to meet security and privacy needs. The performance of an encryption algorithm is one of the most important parameters, if not the most important, in encrypting data; random access memory (RAM) utilization also plays a significant role in ciphering plain text. Most standard simple encryption algorithms, such as the Caesar cipher [2], are weak and may not be safe for protecting data communication against threats, even though they score well on these parameters. Nevertheless, merging such simple algorithms can generate a more powerful algorithm that takes a very long time and excessive effort to break. The two standard algorithms, the Vernam cipher and the Rail-Fence cipher, were merged to produce the "Railve" algorithm, which has strong performance and consumes a small amount of RAM. All three algorithms were tested on two devices, a mobile phone and a laptop.
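
The abstract does not specify how "Railve" composes its two parents, so the following is only an illustrative guess at such a merger: a Vernam-style XOR pass followed by a Rail-Fence transposition, showing why chaining a substitution step with a transposition step is stronger than either alone. The function names and the three-rail depth are hypothetical.

```python
from itertools import cycle

def vernam(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR; a true Vernam key would be as long as the message.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def rail_fence(data: bytes, rails: int) -> bytes:
    # Write bytes along a zigzag of `rails` rows, then read row by row.
    rows = [bytearray() for _ in range(rails)]
    period = 2 * rails - 2
    for i, b in enumerate(data):
        r = i % period
        rows[r if r < rails else period - r].append(b)
    return b"".join(bytes(row) for row in rows)

# Hypothetical composition: substitute first, then transpose.
ciphertext = rail_fence(vernam(b"attack at dawn", b"secret"), rails=3)
print(ciphertext.hex())
```

Decryption would simply invert the two steps in reverse order: undo the Rail-Fence permutation, then XOR with the same keystream.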


Author(s): Yuqian Guan, Jian Guo

Embedded applications are becoming more complex and are required to use computing platform resources more efficiently. Existing dynamic storage allocation (DSA) schemes cannot adapt their memory management to the environment in which they run or integrate various allocation strategies, making it impossible to guarantee constant execution time. Efficient memory utilization is a crucial challenge for developers, especially in embedded operating systems (OSs). In this paper, we propose an adaptive layered segregated fit (ALSF) scheme for DSA. The ALSF scheme combines dynamic two-dimensional arrays and bitmaps, completes the allocation and freeing of memory blocks in constant execution time, and uses memory-splitting technology to reduce internal fragmentation. The proposed scheme also adjusts the number of segregated lists by analyzing the system's allocation of different memory sizes, which improves the matching accuracy of memory blocks. We conducted a comparative experimental analysis of the ALSF and two-level segregated fit (TLSF) schemes in the Zephyr OS. Experiments show that the average memory utilization of the proposed ALSF scheme reaches 94.95%. Compared with the TLSF scheme, our scheme has a 12.99% higher allocation success rate in memory-scarce environments, while the execution speeds of the two are similar.
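
The abstract gives no ALSF internals beyond segregated lists and bitmaps, so the sketch below shows the generic two-level segregated-fit machinery that TLSF uses and that ALSF presumably layers on: a constant-time mapping from request size to a (first-level, second-level) list index, plus a bitmap scan for the first non-empty list. SL_BITS and the helper names are assumptions.

```python
SL_BITS = 4  # 16 second-level lists per first-level class (assumed value)

def mapping(size: int) -> tuple[int, int]:
    # Assumes size >= 2**SL_BITS so the shift amount is non-negative.
    fl = size.bit_length() - 1                      # floor(log2(size))
    sl = (size >> (fl - SL_BITS)) - (1 << SL_BITS)  # next SL_BITS bits
    return fl, sl

def find_free(bitmap: int, start: int) -> int:
    # Bitmap of non-empty lists: isolate the lowest set bit at index >= start.
    masked = bitmap & ~((1 << start) - 1)
    return (masked & -masked).bit_length() - 1 if masked else -1

print(mapping(17 * 1024))     # a 17 KiB request maps to class (14, 1)
print(find_free(0b10100, 3))  # first non-empty list at index >= 3 is 4
```

Because both steps are a handful of bit operations with no loops over list contents, allocation cost is independent of heap state, which is the constant-execution-time property the paper targets.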


Author(s): Sa'ed Abed, Lamis Waleed, Ghadeer Aldamkhi, Khaled Hadi

The data encryption process and key generation techniques protect sensitive data against various attacks. This paper focuses on generating secure cipher keys to raise the level of security and to speed up data integrity checking by using the MinHash function. The methodology applies the Rivest-Shamir-Adleman (RSA) and Advanced Encryption Standard (AES) cryptographic algorithms to generate the cipher keys. These keys are used in the encryption/decryption process, utilizing the Pearson hash and MinHash techniques. The data is divided into shingles, which are used in the hash function to generate integers and in the MinHash function to generate the public and private keys. The MinHash technique checks data integrity by comparing the sender's and the receiver's encrypted digests. The experimental results show that the RSA and AES algorithms based on the MinHash function reduce encryption time compared to normal hash functions by 17.35% and 43.93%, respectively. Data integrity checking between two large sets improves over the original algorithm by 100% in completion time, and in memory utilization by 77% for small/medium data and 100% for large data sets.
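
As a minimal sketch of the MinHash-based integrity check described above (not the paper's exact RSA/AES key construction), sender and receiver can each reduce their shingle set to a short signature and compare signatures instead of full data; the shingle length and signature size here are assumed values.

```python
import hashlib

def shingles(text: str, k: int = 4) -> set[str]:
    # Overlapping k-character substrings of the message.
    return {text[i : i + k] for i in range(len(text) - k + 1)}

def minhash(items: set[str], num_perm: int = 128) -> list[int]:
    # One salted SHA-256 per "permutation"; keep the minimum hash per slot.
    sig = []
    for seed in range(num_perm):
        sig.append(min(
            int.from_bytes(hashlib.sha256(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in items))
    return sig

a = minhash(shingles("the quick brown fox"))
b = minhash(shingles("the quick brown fix"))
# Fraction of matching slots estimates the Jaccard similarity of the sets.
similarity = sum(x == y for x, y in zip(a, b)) / len(a)
print(f"estimated Jaccard similarity: {similarity:.2f}")
```

Comparing two 128-slot signatures costs the same regardless of data size, which is why the approach scales well for large sets in both time and memory.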


Author(s): Reinaldo Padilha França, Yuzo Iano, Ana Carolina Borges Monteiro, Rangel Arthur

Most of the decisions taken around the world are based on data and information. This chapter therefore develops a data transmission method based on discrete event concepts, named CBEDE, implemented in the MATLAB software, and evaluates the memory consumption of the proposal. The method shows great potential to intermediate between users and computer systems within environments and scenarios with cyber-physical systems, ensuring more speed and transmission fluency along with low memory consumption, and resulting in greater reliability. The results show better computational performance related to memory utilization with respect to the compression of the information, with an improvement reaching 95.86%.


2021, Vol. 37, pp. 01021
Author(s): A V Shreyas Madhav, Siddarth Singaravel, A Karmel

Compiler optimization techniques allow developers to achieve peak performance on low-cost hardware and are of prime importance in the field of efficient computing. Compiler suites that apply efficient optimization methods provide a wide array of beneficial attributes that help programs execute with low execution time and minimal memory utilization. Different compilers provide differing degrees of optimization, and applying the appropriate optimization strategies to complex programs can have a significant impact on overall system performance. This paper discusses methods of compiler optimization and covers significant advances in compiler optimization techniques established over the years. It surveys cache optimization methods and multi-memory allocation features, and explores the scope of machine learning in compiler optimization, to attain a sustainable computing experience for developer and user alike.
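
To make one of the surveyed cache optimization methods concrete, the sketch below applies loop tiling (blocking) to matrix multiplication, the transformation a compiler performs on a loop nest so that working blocks stay cache-resident; it is written in plain Python with NumPy purely for illustration, and the tile size is an assumed value.

```python
import numpy as np

def matmul_tiled(A: np.ndarray, B: np.ndarray, tile: int = 64) -> np.ndarray:
    """Blocked matrix multiply: iterate over tile x tile submatrices."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i0 in range(0, n, tile):
        for k0 in range(0, n, tile):
            for j0 in range(0, n, tile):
                # Each pair of tiles is small enough to stay in cache while
                # its partial product is accumulated into the C tile.
                C[i0:i0+tile, j0:j0+tile] += (
                    A[i0:i0+tile, k0:k0+tile] @ B[k0:k0+tile, j0:j0+tile])
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
assert np.allclose(matmul_tiled(A, B), A @ B)  # same result, better locality
```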


2020, Vol. 32 (12), pp. 2320-2332
Author(s): Wen Jin, Anna C. Nobre, Freek van Ede

Working memory enables us to retain past sensations in service of anticipated task demands. How we prepare for anticipated task demands during working memory retention remains poorly understood. Here, we focused on the role of time—asking how temporal expectations help prepare for ensuing memory-guided behavior. We manipulated the expected probe time in a delayed change-detection task and report that temporal expectation can have a profound influence on memory-guided behavioral performance. EEG measurements corroborated the utilization of temporal expectations: demonstrating the involvement of a classic EEG signature of temporal expectation—the contingent negative variation—in the context of working memory. We also report the influence of temporal expectations on 2 EEG signatures associated with visual working memory—the lateralization of 8- to 12-Hz alpha activity, and the contralateral delay activity. We observed a dissociation between these signatures, whereby alpha lateralization (but not the contralateral delay activity) adapted to the time of expected memory utilization. These data show how temporal expectations prepare visual working memory for behavior and shed new light on the electrophysiological markers of both temporal expectation and working memory.

