Optimizing Speedup on Multicore Platform with OpenMP Schedule Clause and Chunk Size

Author(s): Z I Abdul Khalib, H Q Ng, M Elshaikh, M N Othman
2018, Vol 7 (2.4), pp. 46
Author(s): Shubhanshi Singhal, Akanksha Kaushik, Pooja Sharma

Due to the drastic growth of digital data, data deduplication has become a standard component of modern backup systems. It reduces data redundancy, saves storage space, and simplifies the management of data chunks. The process is performed in three steps: chunking, fingerprinting, and indexing of fingerprints. In chunking, data files are divided into chunks, and each chunk boundary is decided by the value of the divisor. For each chunk, a unique identifying value, known as a fingerprint, is computed using a hash function (e.g., MD5, SHA-1, SHA-256). Finally, these fingerprints are stored in an index so that redundant chunks, i.e., chunks having the same fingerprint value, can be detected. The chunk size is an important factor that should be optimal for good deduplication performance. A genetic algorithm (GA), which is gaining much popularity, can be applied to find the best value of the divisor. Indexing also enhances the performance of the system by reducing search time: binary search tree (BST) based indexing has a search time complexity of O(log n), among the lowest of the searching algorithms. A new model is proposed in which a GA finds the value of the divisor; this is the first attempt to apply a GA in the field of data deduplication. The second improvement in the proposed system is that a BST is used to index the fingerprints. The performance of the proposed system is evaluated on the VMDK, Linux, and Quanto datasets, and a good improvement in deduplication ratio is achieved.
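As a concrete illustration of the three steps, the sketch below chains divisor-based chunking, SHA-1 fingerprinting, and a BST index. It is a minimal toy, not the proposed system: the additive rolling checksum, the divisor of 4096, the window and chunk-size limits, the input file name, and the unbalanced BST are all assumptions made for the example, and the GA search for the divisor is omitted.

import hashlib

DIVISOR = 4096                    # boundary where rolling checksum % DIVISOR == 0
MIN_SIZE, MAX_SIZE = 2048, 16384  # guard rails on chunk size
WINDOW = 48                       # bytes covered by the rolling checksum

def chunks(data: bytes):
    """Yield variable-size chunks; a boundary is declared where a toy
    rolling checksum over the last WINDOW bytes is divisible by DIVISOR."""
    start = 0
    rolling = 0
    for i in range(len(data)):
        rolling = (rolling + data[i]) % DIVISOR
        if i >= WINDOW:
            rolling = (rolling - data[i - WINDOW]) % DIVISOR
        size = i + 1 - start
        if (size >= MIN_SIZE and rolling == 0) or size >= MAX_SIZE:
            yield data[start:i + 1]
            start = i + 1
    if start < len(data):
        yield data[start:]

def fingerprint(chunk: bytes) -> str:
    """SHA-1 fingerprint of a chunk (MD5/SHA-1/SHA-256 all work here)."""
    return hashlib.sha1(chunk).hexdigest()

class BSTIndex:
    """Fingerprint index as an (unbalanced) binary search tree; average
    lookup cost is O(log n) for n stored fingerprints."""
    class _Node:
        __slots__ = ("key", "left", "right")
        def __init__(self, key):
            self.key, self.left, self.right = key, None, None

    def __init__(self):
        self._root = None

    def insert_if_new(self, key) -> bool:
        """Insert key; return True if it was new, False for a duplicate."""
        if self._root is None:
            self._root = self._Node(key)
            return True
        node = self._root
        while True:
            if key == node.key:
                return False          # duplicate chunk detected
            side = "left" if key < node.key else "right"
            child = getattr(node, side)
            if child is None:
                setattr(node, side, self._Node(key))
                return True
            node = child

# Deduplicate a byte stream: count unique vs. total chunks.
index, unique, total = BSTIndex(), 0, 0
for c in chunks(open("backup.img", "rb").read()):  # hypothetical input file
    total += 1
    if index.insert_if_new(fingerprint(c)):
        unique += 1
print(f"dedup ratio: {total / max(unique, 1):.2f}")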


2020, Vol 19 (7), pp. 1715-1730
Author(s): Tong Zhang, Fengyuan Ren, Wenxue Cheng, Xiaohui Luo, Ran Shu, ...

PLoS ONE, 2021, Vol 16 (8), pp. e0255856
Author(s): Guocheng Bao, Gongpu Wang, Bing Wang, Lianglong Hu, Xiaowei Xu, ...

Falling collisions during the mechanical harvesting of sweet potato are one of the main causes of epidermal destruction and damage to sweet potato tubers. Therefore, a sweet potato mechanical characteristic test and a full-factor sweet potato drop test were designed. Based on analysis of the fitted mathematical models, the effects of drop height, collision material, and sweet potato chunk size on damage to the sweet potato were studied. The mathematical models relating drop height and sweet potato chunk size to each test index (impact force, impact stress, broken skin area, and damaged area) were established by fitting analysis in IBM SPSS Statistics 22. From these models, the critical epidermal destruction height and critical damage height of a sweet potato of a given size colliding with a given material can be calculated, as can the critical epidermal destruction mass and critical damage mass of a sweet potato falling from a given height onto a given material. A series of critical values of the mechanical properties of sweet potato were then obtained, including the critical epidermal destruction force, critical epidermal destruction impact stress, critical damage force, and critical damage impact stress. The results show that the impact deformation of sweet potato includes both elastic and plastic components and exhibits stress-relaxation-like characteristics. Damage is predicted once the average impact stress on the contact surface is no longer less than the tuber's firmness, which defines the critical damage impact stress. The results provide a theoretical basis for understanding the collision damage mechanism of sweet potato and for reducing damage during harvest.
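The abstract does not state the functional form of the fitted models, so the following is only a hedged sketch of the fit-and-invert step: assuming, purely for illustration, a linear relation between drop height and mean impact stress for one chunk size and one collision material, it solves for the height at which the stress reaches a hypothetical firmness value. All numbers are invented.

import numpy as np

# Hypothetical measurements for one chunk size on one collision material:
# drop height (m) vs. mean impact stress on the contact surface (kPa).
heights = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
stress = np.array([55.0, 98.0, 142.0, 188.0, 231.0])

slope, intercept = np.polyfit(heights, stress, 1)  # linear fit (an assumption)

firmness = 160.0  # hypothetical tuber firmness, kPa
# Damage is predicted once mean contact stress reaches the firmness,
# so the critical damage height is where the fitted line crosses it.
critical_height = (firmness - intercept) / slope
print(f"critical damage height ~ {critical_height:.2f} m")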


2009
Author(s): Amanda L. Gilchrist, Nelson Cowan, Moshe Naveh-Benjamin

2018, Vol 20 (5), pp. 1058-1070
Author(s): Haicheng Liu, Peter van Oosterom, Theo Tijssen, Tom Commandeur, Wen Wang

Management of large hydrologic datasets, including their storage, structuring, clustering, indexing, and querying, is one of the crucial challenges in the era of big data. This research originates from a specific problem: time series extraction at specific locations takes a long time when a large multidimensional (MD) dataset is stored in the NetCDF classic or 64-bit offset format. The essence of this issue lies in the contiguous storage structure adopted by NetCDF. In this research, NetCDF file-based solutions and an MD array database management system (DBMS) applying a chunked storage structure are benchmarked to determine the best solution for storing and querying large MD hydrologic datasets. Expert consultancy was conducted to establish the benchmark sets, with the HydroNET-4 system providing the benchmark environment. In the final benchmark tests, the effect of the data storage configuration, covering chunk size, dimension order (spatio-temporal clustering), and compression, on query performance is explored. Results indicate that for big hydrologic MD data management, a properly chunked NetCDF-4 solution without compression is, in general, more efficient than the SciDB DBMS. However, the benefits of a DBMS should not be neglected, for example, integration with other data types, smart caching strategies, transaction support, scalability, and out-of-the-box support for parallelization.
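The storage choice the benchmark explores can be sketched with the netCDF4 Python bindings: writing a (time, lat, lon) variable in the chunked NetCDF-4 format rather than the contiguous classic layout, so that a time-series read at one location touches a single chunk. The file name, grid sizes, and chunk shape below are illustrative assumptions, not the paper's benchmark configuration.

import numpy as np
from netCDF4 import Dataset

nt, ny, nx = 8760, 20, 20  # one year of hourly data on a small grid (assumption)

with Dataset("rainfall.nc", "w", format="NETCDF4") as ds:
    ds.createDimension("time", nt)
    ds.createDimension("lat", ny)
    ds.createDimension("lon", nx)
    # Chunks that are long in time and small in space favour point
    # time-series queries; zlib=False mirrors the finding that the
    # uncompressed chunked layout performed best.
    rain = ds.createVariable("rain", "f4", ("time", "lat", "lon"),
                             chunksizes=(nt, 4, 4), zlib=False)
    rain[:] = np.random.random((nt, ny, nx)).astype("f4")

# A full time series at one cell now touches a single chunk instead of
# scanning the whole contiguous array.
with Dataset("rainfall.nc") as ds:
    series = ds["rain"][:, 10, 10]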

