chunk size
Recently Published Documents


TOTAL DOCUMENTS: 25 (five years: 8)

H-INDEX: 5 (five years: 0)

2021 ◽  
Author(s):  
Dror Dotan ◽  
Nadin Brutman

Representing the base-10 structure of numbers is a challenging cognitive ability, unique to humans, but it is still unknown precisely how this is done. Here, we examined whether and how literate adults represent a number’s full syntactic structure. In 5 experiments, participants repeated sequences of 6-7 number words, and we systematically varied the order of words within the sequence. Repetition was more accurate when the sequence was grammatical (e.g., ninety-seven) than when it was not (seven-ninety). Performance improved monotonically for sequences with increasingly longer grammatical segments, up to a limit of ~4 words per segment, irrespective of the number of digits, and worsened thereafter. We conclude that at least for numbers up to 6 digits long, participants represented the number’s full syntactic structure and used it to merge number words into chunks in short-term memory. Short chunks improved memorization, but oversized chunks disrupted it. The existence of a chunk-size limit suggests that the chunks are not memorized templates, whose size limit would not be expected to be so low. Rather, they are created ad hoc by a generative process, such as the hierarchical syntactic representation hypothesized in Michael McCloskey’s number-processing model. Chunking occurred even when it disrupted performance, and even when external cues for chunking were controlled for or removed; we conclude that this generative process operates automatically rather than voluntarily.
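As a rough illustration of the syntax-driven chunking described above, the sketch below greedily merges adjacent number words into grammatical segments using a toy adjacency rule; the word categories and the `grammatical_segments` helper are hypothetical simplifications, not the grammar or materials used in the study.

```python
# Toy illustration: merge adjacent number words into grammatical chunks.
# The categories and adjacency rule are a simplification, not the study's grammar.
UNITS = {"one", "two", "three", "four", "five", "six", "seven", "eight", "nine"}
TEENS = {"ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
         "sixteen", "seventeen", "eighteen", "nineteen"}
TENS = {"twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"}
MULTIPLIERS = {"hundred", "thousand"}

def can_follow(prev, word):
    """Very rough adjacency rule: does `word` grammatically continue `prev`?"""
    if prev in TENS and word in UNITS:                          # "ninety seven"
        return True
    if prev in UNITS and word in MULTIPLIERS:                   # "seven hundred"
        return True
    if prev in MULTIPLIERS and word in (UNITS | TEENS | TENS):  # "hundred twenty"
        return True
    return False

def grammatical_segments(words):
    """Greedily split a word sequence into maximal grammatical segments."""
    segments, current = [], [words[0]]
    for prev, word in zip(words, words[1:]):
        if can_follow(prev, word):
            current.append(word)
        else:
            segments.append(current)
            current = [word]
    segments.append(current)
    return segments

print(grammatical_segments(["ninety", "seven", "hundred", "twenty", "five", "three"]))
# -> [['ninety', 'seven', 'hundred', 'twenty', 'five'], ['three']]
```

In this toy version, a grammatical sequence such as "ninety seven" forms one segment, while an ungrammatical transition such as "five three" forces a segment boundary, mirroring the grammatical/ungrammatical contrast in the experiments.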


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0255856
Author(s):  
Guocheng Bao ◽  
Gongpu Wang ◽  
Bing Wang ◽  
Lianglong Hu ◽  
Xiaowei Xu ◽  
...  

Collisions caused by falling during the mechanical harvesting of sweet potato are one of the main causes of epidermal destruction and damage to sweet potato tubers. Therefore, a sweet potato mechanical characteristic test and a full-factorial sweet potato drop test were designed. Based on the analysis of the fitted mathematical models, the effects of drop height, collision material and sweet potato chunk size on damage to the sweet potato were studied. Mathematical models relating drop height and sweet potato chunk size to each test index (impact force, impact stress, broken skin area and damaged area) were established by fitting analysis in IBM SPSS Statistics 22. From these models, the critical epidermal destruction height and the critical damage height of a sweet potato of a given size colliding with a given material can be calculated, as can the critical epidermal destruction mass and critical damage mass of a sweet potato falling from a given height onto a given material. A series of critical values of the mechanical properties of sweet potato were then obtained, including the critical epidermal destruction force, critical epidermal destruction impact stress, critical damage force and critical damage impact stress. The results show that the impact deformation of sweet potato includes both elastic and plastic components and exhibits stress relaxation characteristics. The criterion for the critical damage impact stress is that the average impact stress on the contact surface remains below the tuber’s firmness. The results provide a theoretical basis for understanding the collision damage mechanism of sweet potato and for reducing damage during harvest.
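The sketch below shows how a fitted model of this kind could be inverted numerically to obtain a critical drop height for a tuber of a given mass; the quadratic model form, the coefficients, and the `impact_stress` helper are hypothetical placeholders, not the regression reported in the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical fitted model: impact stress (kPa) as a function of drop height h (m)
# and chunk mass m (g). The functional form and coefficients are placeholders.
def impact_stress(h, m, a=120.0, b=35.0, c=0.4):
    return a * h + b * h**2 + c * m

def critical_height(firmness_kpa, mass_g, h_max=3.0):
    """Drop height at which the predicted impact stress reaches the tuber firmness."""
    f = lambda h: impact_stress(h, mass_g) - firmness_kpa
    if f(h_max) < 0:        # stress never reaches firmness within the tested range
        return None
    return brentq(f, 0.0, h_max)

print(critical_height(firmness_kpa=260.0, mass_g=180.0))
```

The same root-finding step, applied with chunk mass as the unknown instead of height, would give the critical damage mass for a fixed drop height.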


Author(s):  
Vijesh Joe ◽  
Jennifer S. Raj ◽  
Smys S.

In the big data era, there is a high demand for data storage and processing. The conventional approach faces a great challenge, and de-duplication is an excellent approach to reduce storage space and computational time. Many existing approaches take considerable time to pinpoint similar data. A MapReduce de-duplication system is proposed to attain a high de-duplication ratio. MapReduce is a parallel processing approach that helps to process a large number of files in less time. The proposed system uses the two-threshold two-divisor with switch (TTTD-S) algorithm for chunking; the switch is the average-size parameter used by TTTD-S to minimize chunk-size variance. Hashing with SHA-3 and fractal-tree indexing are used; in a fractal index tree, reads and writes can take place at the same time. Data size after de-duplication, de-duplication ratio, throughput, hash time, chunk time, and de-duplication time are the evaluation parameters. The performance of the system is tested on the College Scorecard and ZCTA datasets. The experimental results show that the proposed system can reduce duplication and processing time.
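A minimal sketch of the chunk-then-fingerprint pipeline described above is shown below: content-defined chunking in the spirit of TTTD-S followed by SHA-3 fingerprinting. The thresholds, divisors, and the toy rolling hash are illustrative choices, not the parameters or implementation used in the paper, and the MapReduce and fractal-tree indexing stages are omitted.

```python
import hashlib

# Simplified TTTD-S-style content-defined chunking plus SHA-3 de-duplication.
T_MIN, T_MAX, AVG = 460, 2800, 1015      # min/max/average chunk sizes (bytes)
D_MAIN, D_BACKUP = 540, 270              # main and backup divisors

def _rolling_hash(window: bytes) -> int:
    # Toy polynomial hash over a sliding window (stand-in for a real rolling hash).
    h = 0
    for b in window:
        h = (h * 31 + b) & 0xFFFFFFFF
    return h

def tttd_s_chunks(data: bytes, window: int = 48):
    chunks, start, backup = [], 0, -1
    d_main, d_backup = D_MAIN, D_BACKUP
    for i in range(len(data)):
        length = i - start + 1
        if length < T_MIN:
            continue
        if length > AVG and d_main == D_MAIN:
            d_main, d_backup = D_MAIN // 2, D_BACKUP // 2   # the "switch"
        h = _rolling_hash(data[max(start, i - window + 1):i + 1])
        if h % d_backup == d_backup - 1:
            backup = i                                       # backup breakpoint
        if h % d_main == d_main - 1:
            chunks.append(data[start:i + 1])
            start, backup, d_main, d_backup = i + 1, -1, D_MAIN, D_BACKUP
        elif length >= T_MAX:
            cut = backup if backup != -1 else i
            chunks.append(data[start:cut + 1])
            start, backup, d_main, d_backup = cut + 1, -1, D_MAIN, D_BACKUP
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def deduplicate(data: bytes):
    """Keep only chunks whose SHA-3 fingerprint has not been seen before."""
    seen, unique = set(), []
    for chunk in tttd_s_chunks(data):
        digest = hashlib.sha3_256(chunk).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(chunk)
    return unique

data = bytes(range(256)) * 64            # 16 KiB of repetitive sample data
print(len(tttd_s_chunks(data)), len(deduplicate(data)))
```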


2020 ◽  
Vol 19 (7) ◽  
pp. 1715-1730
Author(s):  
Tong Zhang ◽  
Fengyuan Ren ◽  
Wenxue Cheng ◽  
Xiaohui Luo ◽  
Ran Shu ◽  
...  

Author(s):  
Z I Abdul Khalib ◽  
H Q Ng ◽  
M Elshaikh ◽  
M N Othman
Keyword(s):  

Symmetry ◽  
2019 ◽  
Vol 11 (6) ◽  
pp. 801
Author(s):  
Xinran Zhou ◽  
Xiaoyan Kui

The online sequential extreme learning machine with persistent regularization and forgetting factor (OSELM-PRFF) can avoid potential singularities or ill-posed problems of online sequential regularized extreme learning machines with forgetting factors (FR-OSELM), and is particularly suitable for modelling in non-stationary environments. However, existing algorithms for OSELM-PRFF are time-consuming or unstable in certain paradigms or parameter setups. This paper presents a novel algorithm for OSELM-PRFF, named Cholesky-factorization-based OSELM-PRFF (CF-OSELM-PRFF), which recurrently constructs an equation for the extreme learning machine and efficiently solves it via Cholesky factorization during every cycle. CF-OSELM-PRFF handles the timeliness of samples via a forgetting factor, and the regularization term in its cost function acts persistently. CF-OSELM-PRFF can learn data one-by-one or chunk-by-chunk with a fixed or varying chunk size. Detailed performance comparisons between CF-OSELM-PRFF and relevant approaches are carried out on several regression problems. The numerical simulation results show that CF-OSELM-PRFF demonstrates higher computational efficiency than its counterparts and can yield stable predictions.
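For orientation, the sketch below shows a Cholesky-based online sequential ELM update with a forgetting factor and a regularization term that is not discounted over time. The recursion used here is one plausible form under those assumptions; the exact update derived in the paper may differ, and the class name and hyperparameters are illustrative.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

class CholeskyOSELM:
    """Sketch of a forgetting-factor OS-ELM with persistent regularization,
    solved by Cholesky factorization each cycle (not the paper's exact recursion)."""

    def __init__(self, n_in, n_hidden, C=1e-2, lam=0.98, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        self.W = rng.standard_normal((n_in, n_hidden))   # random input weights
        self.b = rng.standard_normal(n_hidden)           # random hidden biases
        self.C, self.lam = C, lam
        self.A = np.zeros((n_hidden, n_hidden))          # discounted H^T H
        self.r = None                                    # discounted H^T T
        self.beta = None

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid features

    def partial_fit(self, X, T):
        """Update with one chunk of any size (rows of X, targets T)."""
        H = self._hidden(np.atleast_2d(X))
        T = np.atleast_2d(T)
        if self.r is None:
            self.r = np.zeros((H.shape[1], T.shape[1]))
        self.A = self.lam * self.A + H.T @ H           # forget old data ...
        self.r = self.lam * self.r + H.T @ T
        K = self.A + self.C * np.eye(H.shape[1])       # ... but keep full regularization
        c, low = cho_factor(K)                         # Cholesky factorization
        self.beta = cho_solve((c, low), self.r)        # solve K beta = r
        return self

    def predict(self, X):
        return self._hidden(np.atleast_2d(X)) @ self.beta
```

Because the regularization term is re-added in full at every cycle rather than being multiplied by the forgetting factor, the matrix passed to the Cholesky routine stays positive definite even when little recent data has arrived, which is the property that avoids the ill-posedness mentioned above.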


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3225
Author(s):  
Rithea Ngeth ◽  
Brian Kurkoski ◽  
Yuto Lim ◽  
Yasuo Tan

This paper investigates the design of overlapped chunked codes (OCC) for multi-source multi-relay networks in which a physical-layer network coding approach, compute-and-forward (CF) based on nested lattice codes (NLC), is applied to the simultaneous transmissions from the sources to the relays. The resulting code is called OCC/CF. At each source, OCC is applied before NLC prior to transmission, and random linear network coding is applied within each chunk. A decodability condition for designing OCC/CF is provided. In addition, an OCC with a contiguously overlapping but non-rounded-end structure is employed for the design, using the probability distribution of the number of innovative codeword combinations and the probability distribution of the participation factor of each source in the codeword combinations received for a chunk transmission. An estimation is performed to select an allocation, i.e., the number of innovative blocks per chunk and the number of blocks taken from the previous chunk for each source, that is expected to provide the desired performance. The numerical results show that the design overhead of OCC/CF is low when the probability distribution of the participation factor of each source is concentrated at the chunk size for each source.
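To make the chunk allocation concrete, the sketch below builds contiguously overlapping chunks (each chunk carries a few blocks over from the previous one) and applies random linear combinations within a chunk over GF(2). The chunk size, overlap, field choice, and helper names are illustrative only; the nested-lattice compute-and-forward stage of OCC/CF is not modelled.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_chunks(blocks, innovative_per_chunk, overlap):
    """Each chunk = `overlap` blocks carried over from the previous chunk
    followed by `innovative_per_chunk` new blocks (non-rounded-end: the
    last chunk may simply be shorter)."""
    chunks, start = [], 0
    while start < len(blocks):
        lo = max(0, start - overlap)
        chunks.append(blocks[lo:start + innovative_per_chunk])
        start += innovative_per_chunk
    return chunks

def rlnc_encode(chunk, n_combinations):
    """Random linear combinations of the blocks in one chunk (GF(2): XOR)."""
    chunk = np.array(chunk, dtype=np.uint8)            # shape (k, block_len)
    coeffs = rng.integers(0, 2, size=(n_combinations, len(chunk)), dtype=np.uint8)
    coded = (coeffs @ chunk) % 2                        # XOR-combine selected blocks
    return coeffs, coded

blocks = rng.integers(0, 2, size=(12, 8), dtype=np.uint8)     # 12 blocks of 8 bits
chunks = make_chunks(list(blocks), innovative_per_chunk=4, overlap=2)
coeffs, coded = rlnc_encode(chunks[1], n_combinations=6)
print(len(chunks), coded.shape)
```

In this toy allocation, `innovative_per_chunk` and `overlap` play the roles of the two quantities the paper's estimation step selects per source.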


Author(s):  
Toan Ong ◽  
Ibrahim Lazrig ◽  
Indrajit Ray ◽  
Indrakshi Ray ◽  
Michael Kahn

Introduction
Bloom filters (BFs) are a scalable solution for probabilistic privacy-preserving record linkage, but BFs can be compromised. Yao’s garbled circuits (GCs) can perform secure multi-party computation to compute the similarity of two BFs without a trusted third party. The major drawback of using BFs and GCs together is poor efficiency.

Objectives and Approach
We evaluated the feasibility of BFs+GCs using high-capacity compute engines and a novel parallel processing framework implemented in Google Cloud Compute Engines (GCCE). In Yao’s two-party secure computation protocol, one party serves as the generator and the other as the evaluator. To link data in parallel, records from both parties are divided into chunks. Linkage between every two chunks in the same block is processed by a thread. The number of threads for linkage depends on the available computing resources. We tested the parallelized process in various scenarios with variations in hardware and software configurations.

Results
Two synthetic datasets with 10K records were linked using BFs+GCs on 12 different software and hardware configurations, which varied by number of CPU cores (4 to 32), memory size (15GB to 28.8GB), number of threads (6 to 41), and chunk size (50 to 200 records). The minimum configuration (4 cores; 15GB memory) took 8,062.4s to complete, whereas the maximum configuration (32 cores; 28.8GB memory) took 1,454.1s. Increasing the number of threads or changing the chunk size without providing more CPU cores and memory did not improve efficiency. Efficiency improved on average by 39.81% when the number of cores and the memory on both sides were doubled. CPU utilization was maximized (near 100% on both sides) when the computing power of the generator was double that of the evaluator.

Conclusion/Implications
The PPRL runtime of BFs+GCs was greatly improved using parallel processing in a cloud-based infrastructure. A cluster of GCCEs could be leveraged to reduce the runtime of data linkage operations even further. Scalable cloud-based infrastructures can overcome the trade-off between security and efficiency, allowing computationally complex methods to be implemented.
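The sketch below mirrors only the chunk-parallel structure described above: records from each party are split into chunks and every pair of chunks is compared by a worker thread. The Dice similarity of Bloom filters is computed in the clear here for illustration; in the actual protocol this comparison runs inside a garbled circuit, which is abstracted away, and `CHUNK_SIZE` and the thread count are illustrative values.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from itertools import product

CHUNK_SIZE = 100   # records per chunk (illustrative)

def dice(bf_a: np.ndarray, bf_b: np.ndarray) -> float:
    """Dice coefficient of two boolean Bloom-filter vectors."""
    inter = np.count_nonzero(bf_a & bf_b)
    total = np.count_nonzero(bf_a) + np.count_nonzero(bf_b)
    return 2.0 * inter / total if total else 0.0

def compare_chunks(chunk_a, chunk_b, threshold=0.8):
    """Return (i, j, score) for all pairs above the threshold (chunk-local indices)."""
    matches = []
    for (i, a), (j, b) in product(enumerate(chunk_a), enumerate(chunk_b)):
        score = dice(a, b)
        if score >= threshold:
            matches.append((i, j, score))
    return matches

def link(bfs_a, bfs_b, n_threads=8):
    """Compare every chunk of party A against every chunk of party B in parallel."""
    chunks_a = [bfs_a[i:i + CHUNK_SIZE] for i in range(0, len(bfs_a), CHUNK_SIZE)]
    chunks_b = [bfs_b[i:i + CHUNK_SIZE] for i in range(0, len(bfs_b), CHUNK_SIZE)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        futures = [pool.submit(compare_chunks, ca, cb)
                   for ca, cb in product(chunks_a, chunks_b)]
        return [m for f in futures for m in f.result()]
```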

