Optimizing Memory Space by Removing Duplicate Files Using Similarity Digest Technique

Author(s):  
Vedant Sharma ◽  
Priyamwada Sharma ◽  
Santosh Sahu


2020 ◽
Author(s):  
Zahra Zandesh

BACKGROUND The complicated nature of cloud computing, which encompasses internet-based technologies and service models for delivering IT applications, processing capability, storage, and memory space, together with its other notable features, motivates organizations to migrate their core businesses to the cloud. Consequently, healthcare organizations are keen to migrate to this new paradigm despite challenges concerning security, privacy, and compliance issues.
OBJECTIVE The present study investigates all cloud compliance instruments related to the health domain in order to identify gaps in this context.
METHODS All works on cloud compliance issues in the health domain published after 2013 were surveyed in the PubMed, Scopus, Web of Science, and IEEE Digital Library databases.
RESULTS In total, 36 compliance instruments used in different countries for a variety of purposes were found in this domain. They fall into three groups: five standards, twenty-eight pieces of legislation, and three policies and guidelines, each of which is presented here in detail.
CONCLUSIONS Main headline areas, such as compliance management, data management, data governance, information security services, medical ethics, and patients' rights, are recommended as components that any compliance instrument or framework in this domain, together with its corresponding patterns, should cover.


1995 ◽  
Vol 05 (02) ◽  
pp. 239-259
Author(s):  
SU HWAN KIM ◽  
SEON WOOK KIM ◽  
TAE WON RHEE

For data analysis it is very important to combine data with similar attribute values into a categorically homogeneous subset, called a cluster; this technique is called clustering. Crisp clustering algorithms are generally weak against noise, because each datum must be assigned to exactly one cluster. To solve this problem, the fuzzy c-means, fuzzy maximum likelihood estimation, and optimal fuzzy clustering algorithms have been proposed in fuzzy set theory. They require a great deal of processing time, however, because of exhaustive iteration over the data and their memberships. A large memory footprint in particular degrades performance in real-time processing applications, because swapping between main memory and secondary memory takes too much time. To overcome these limitations, this paper proposes an extended fuzzy clustering algorithm based on an unsupervised optimal fuzzy clustering algorithm. The algorithm assigns a weight factor to each distinct datum according to its occurrence rate, and it also considers the degree of importance of each attribute, which determines the characteristics of the data. The worst case is when the whole data set is uniformly normally distributed, so that all attributes are equally important. The proposed extended fuzzy clustering algorithm outperforms the unsupervised optimal fuzzy clustering algorithm in terms of memory space and execution time in most cases. For simulation, the proposed algorithm is applied to color image segmentation; automatic target detection and multipeak detection are also considered as applications. These schemes can be applied to any other fuzzy clustering algorithm.
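The weighting idea can be made concrete with a short sketch. The C++ fragment below implements one update step of a fuzzy c-means variant in which each distinct datum carries an occurrence-rate weight and each attribute an importance factor. The abstract does not give the exact update rules, so this formulation is an assumption modeled on standard fuzzy c-means, and all names (`WeightedFCM`, `step`, `w`, `g`) are illustrative.

```cpp
#include <cmath>
#include <vector>

// Sketch of one iteration of a weighted fuzzy c-means update, assuming
// each distinct datum x[k] has an occurrence weight w[k] and each
// attribute a has an importance factor g[a], as the abstract suggests.
struct WeightedFCM {
    int c;                                      // number of clusters
    double m = 2.0;                             // fuzzifier, m > 1
    std::vector<std::vector<double>> centers;   // c x dim cluster centers

    // attribute-importance-weighted squared distance
    static double dist2(const std::vector<double>& x,
                        const std::vector<double>& v,
                        const std::vector<double>& g) {
        double s = 1e-12;                       // guard against zero distance
        for (std::size_t a = 0; a < x.size(); ++a)
            s += g[a] * (x[a] - v[a]) * (x[a] - v[a]);
        return s;
    }

    // one joint update of memberships u (c x n) and cluster centers
    void step(const std::vector<std::vector<double>>& data,
              const std::vector<double>& w,     // occurrence rate per distinct datum
              const std::vector<double>& g,     // importance per attribute
              std::vector<std::vector<double>>& u) {
        const std::size_t n = data.size(), dim = g.size();
        // exponent on ratios of SQUARED distances; equals the usual
        // 2/(m-1) exponent on plain distances
        const double p = 1.0 / (m - 1.0);
        for (std::size_t k = 0; k < n; ++k)
            for (int i = 0; i < c; ++i) {
                const double dik = dist2(data[k], centers[i], g);
                double s = 0.0;
                for (int j = 0; j < c; ++j)
                    s += std::pow(dik / dist2(data[k], centers[j], g), p);
                u[i][k] = 1.0 / s;
            }
        // center update, weighting each distinct datum by its occurrence
        // rate so that duplicates need not be stored individually
        for (int i = 0; i < c; ++i) {
            std::vector<double> num(dim, 0.0);
            double den = 0.0;
            for (std::size_t k = 0; k < n; ++k) {
                const double um = w[k] * std::pow(u[i][k], m);
                for (std::size_t a = 0; a < dim; ++a) num[a] += um * data[k][a];
                den += um;
            }
            for (std::size_t a = 0; a < dim; ++a) centers[i][a] = num[a] / den;
        }
    }
};
```

Because duplicates collapse into a single weighted datum, the membership matrix and the per-iteration cost scale with the number of distinct values rather than the raw data size, which is where the memory and execution-time savings claimed above come from.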


2013 ◽  
Vol 380-384 ◽  
pp. 1969-1972
Author(s):  
Bo Yuan ◽  
Jin Dou Fan ◽  
Bin Liu

Traditional network processors (NPs) adopt either a local memory mechanism or a cache mechanism as the hierarchical memory structure. The local memory mechanism usually offers only a small on-chip memory space, which is not suited to varied, complicated applications. The cache mechanism is better at handling temporary data that must be read and written frequently, but in deep packet processing a cache miss occurs when reading each segment of a packet. We propose a cooperative mechanism of local memory and cache, in which packet data and temporary data are stored in local memory and cache, respectively. Analysis and experimental evaluation show that the cooperative mechanism can improve the performance of network processors and reduce processing latency at little extra resource cost.
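As a rough illustration of how such a split might look in software, the sketch below places arriving packet segments into a scratchpad-like local memory buffer and keeps small per-flow temporary state in ordinary cached memory. The abstract gives no hardware or API details, so every size, type, and function here (`local_mem`, `FlowState`, `store_segment`) is hypothetical.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Assumed scratchpad size; a static buffer stands in for on-chip local memory.
constexpr std::size_t LOCAL_MEM_SIZE = 64 * 1024;
alignas(64) static std::uint8_t local_mem[LOCAL_MEM_SIZE];

// Temporary data: small, read and written often, so it benefits from the cache.
struct FlowState {
    std::uint64_t byte_count = 0;
    std::uint32_t last_seq   = 0;
};

// Packet data: copied into the local-memory region so that deep packet
// processing reads never miss in the cache.
inline std::uint8_t* store_segment(const std::uint8_t* seg, std::size_t len,
                                   std::size_t off) {
    if (off + len > LOCAL_MEM_SIZE) return nullptr;  // would overflow local memory
    std::memcpy(local_mem + off, seg, len);
    return local_mem + off;
}

// Per-flow state is updated in place and left to the cache hierarchy.
inline void account(FlowState& fs, std::size_t len, std::uint32_t seq) {
    fs.byte_count += len;
    fs.last_seq = seq;
}
```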


2010 ◽  
Vol 18 (3) ◽  
pp. 451-489 ◽  
Author(s):  
Tatsuya Motoki

As practitioners we are interested in the likelihood of the population containing a copy of the optimum. The dynamic systems approach, however, does not help us calculate that quantity. Markov chain analysis can in principle be used to calculate it, but since the associated transition matrices are enormous even for modest problems, these calculations are usually computationally infeasible in practice. Some improvement on this situation is therefore desirable. In this paper, we present a method for modeling the behavior of finite-population evolutionary algorithms (EAs), and show that if the population size is greater than 1 and much less than the cardinality of the search space, the resulting exact model requires considerably less memory space for theoretically running the stochastic search process of the original EA than the Nix and Vose-style Markov chain model. We also present some approximate models that use still less memory space than the exact model. Furthermore, based on our models, we examine the selection pressure of fitness-proportionate selection, and observe that, on average over all population trajectories, the bias toward selecting higher-fitness individuals is not as strong as the fitness landscape suggests.
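For reference, fitness-proportionate selection itself is straightforward; the sketch below shows its standard roulette-wheel form using `std::discrete_distribution`. The population representation is an illustrative stand-in, not the paper's model.

```cpp
#include <random>
#include <vector>

// Roulette-wheel (fitness-proportionate) selection: individual i is drawn
// with probability f_i / sum_j f_j.
int select_proportionate(const std::vector<double>& fitness, std::mt19937& rng) {
    std::discrete_distribution<int> pick(fitness.begin(), fitness.end());
    return pick(rng);
}

int main() {
    std::mt19937 rng(42);
    std::vector<double> fitness = {1.0, 2.0, 3.0, 4.0};
    std::vector<int> counts(4, 0);
    // Sampling many times, the selection proportions approach
    // 0.1, 0.2, 0.3, 0.4; low-fitness individuals still reproduce
    // occasionally, one ingredient of the finite-population effects
    // the paper studies.
    for (int t = 0; t < 10000; ++t) ++counts[select_proportionate(fitness, rng)];
    return 0;
}
```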


2012 ◽  
Vol 20 (2) ◽  
pp. 89-114 ◽  
Author(s):  
H. Carter Edwards ◽  
Daniel Sunderland ◽  
Vicki Porter ◽  
Chris Amsler ◽  
Sam Mish

Large, complex scientific and engineering application codes have a significant investment in the computational kernels that implement their mathematical models. Porting these computational kernels to the collection of modern manycore accelerator devices is a major challenge in that these devices have diverse programming models, application programming interfaces (APIs), and performance requirements. The Kokkos Array programming model provides a library-based approach to implementing computational kernels that are performance-portable across CPU-multicore and GPGPU accelerator devices. The programming model is based upon three fundamental concepts: (1) manycore compute devices, each with its own memory space, (2) data-parallel kernels, and (3) multidimensional arrays. Kernel execution performance, especially on NVIDIA® devices, depends heavily on data access patterns, and the optimal data access pattern can differ between manycore devices, potentially leading to different implementations of computational kernels specialized for different devices. The Kokkos Array programming model supports performance-portable kernels by (1) separating data access patterns from computational kernels through a multidimensional array API and (2) introducing device-specific data access mappings when a kernel is compiled. An implementation of Kokkos Array is available through Trilinos [Trilinos website, http://trilinos.sandia.gov/, August 2011].
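For flavor, the fragment below shows a small data-parallel kernel written against the present-day Kokkos Core API, which descends from the Kokkos Array model described here (the 2012-era API differs in detail). The `View` hides the device-specific array layout, so the same kernel body can compile for CPU-thread or GPU back ends.

```cpp
#include <cstdio>
#include <Kokkos_Core.hpp>

int main(int argc, char* argv[]) {
    Kokkos::initialize(argc, argv);
    {
        const int n = 1 << 20;
        // Device-resident 1-D array; its memory layout is chosen per
        // back end at compile time, separating access pattern from kernel.
        Kokkos::View<double*> x("x", n);
        Kokkos::parallel_for("fill", n, KOKKOS_LAMBDA(const int i) {
            x(i) = 0.5 * static_cast<double>(i);
        });
        double sum = 0.0;
        Kokkos::parallel_reduce("sum", n, KOKKOS_LAMBDA(const int i, double& acc) {
            acc += x(i);
        }, sum);
        Kokkos::fence();
        std::printf("sum = %g\n", sum);
    }
    Kokkos::finalize();
    return 0;
}
```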

