processing rate
Recently Published Documents

TOTAL DOCUMENTS: 96 (five years: 13)
H-INDEX: 17 (five years: 1)

2021, Vol 17 (4), pp. 1-27
Author(s): Xiaojia Song, Tao Xie, Stephen Fischer

Existing near-data processing (NDP)-powered architectures have demonstrated their strength for some data-intensive applications. Data center servers, however, have to serve not only data-intensive but also compute-intensive applications. An in-depth understanding of the impact of NDP on various data center applications is still needed. For example, can a compute-intensive application also benefit from NDP? In addition, current NDP techniques focus on maximizing the data processing rate by utilizing all computing resources at all times. Is this “always running in full gear” strategy consistently beneficial for an application? To answer these questions, we first propose two reconfigurable NDP-powered servers called RANS (Reconfigurable ARM-based NDP Server) and RFNS (Reconfigurable FPGA-based NDP Server). Next, we implement a single-engine prototype of each on a conventional data center server and then evaluate its effectiveness. Experimental results measured on the two prototypes are then extrapolated to estimate the properties of the two full-size reconfigurable NDP servers. Finally, several new findings are presented. For example, we find that while RANS can only benefit data-intensive applications, RFNS can offer benefits to both data-intensive and compute-intensive applications. Moreover, we find that for certain applications the reconfigurability of RANS/RFNS can deliver noticeable energy efficiency gains without any performance degradation.


Entropy, 2021, Vol 23 (2), pp. 228
Author(s): Sze-Ying Lam, Alexandre Zénon
Alexandre Zénon

Previous investigations concluded that the human brain’s information processing rate remains fundamentally constant, irrespective of task demands. However, their conclusion rested on analyses of simple discrete-choice tasks. The present contribution recasts the question of human information rate within the context of visuomotor tasks, which provides a more ecologically relevant arena, albeit a more complex one. We argue that, while predictable aspects of inputs can be encoded virtually free of charge, real-time information transfer should be identified with the processing of surprises. We formalise this intuition by deriving from first principles a decomposition of the total information shared by inputs and outputs into a feedforward, predictive component and a feedback, error-correcting component. We find that the information measured by the feedback component, a proxy for the brain’s information processing rate, scales with the difficulty of the task at hand, in agreement with cost-benefit models of cognitive effort.


2021, pp. 1-1
Author(s): Zhenyu Tian, Xiaowei Cui, Gang Liu, Yonghui Zhu, Mingquan Lu

2020
Author(s): Sze-Ying Lam, Alexandre Zénon

Abstract
While previous studies of human information rate focused primarily on discrete forced-choice tasks, we extend the scope of the investigation to the framework of sensorimotor tracking of continuous signals. We show how considering information transfer in this context sheds new light on the problem; crucially, such an analysis requires one to consider and carefully disentangle the effects due to real-time information processing of surprising inputs (the feedback component) from the contribution to performance due to prediction (the feedforward component). We argue that only the former constitutes a faithful representation of the true information processing rate. We provide information-theoretic measures which separately quantify these components and show that they correspond to a decomposition of the total information shared between target and tracking signals. We employ a linear quadratic regulator model to provide evidence for the validity of the measures, as well as of the estimator of visual-motor delay (VMD) from experimental data, which is instrumental to computing them in practice. On experimental tracking data, we show that the contribution of prediction as computed by the feedforward measure increases with the predictability of the signal, confirming previous findings. Importantly, we further find the feedback component to be modulated by task difficulty, with higher information transmission rates observed with noisier signals. Such opposite trends between feedback and feedforward point to a tradeoff between cognitive resources/effort and performance gain.

Author summary
Previous investigations concluded that the human brain’s information processing rate remains fundamentally constant, irrespective of task demands. However, their conclusion rested on analyses of simple discrete-choice tasks. The present contribution recasts the question of human information rate within the context of visuomotor tasks, which provides a more ecologically relevant arena, albeit a more complex one. We argue that, while predictable aspects of inputs can be encoded virtually free of charge, real-time information transfer should be identified with the processing of surprises. We formalise this intuition by deriving from first principles a decomposition of the total information shared by inputs and outputs into a feedforward, predictive component and a feedback, error-correcting component. We find that the information measured by the feedback component, a proxy for the brain’s information processing rate, scales with the difficulty of the task at hand, in agreement with cost-benefit models of cognitive effort.


2020
Author(s): Carla Mavian, Roxana M Coman, Ben M Dunn, Maureen M Goodenow

Abstract
Subtype C and A HIV-1 strains dominate the epidemic in Africa and Asia, while sub-subtype A2 is found at low frequency only in West Africa. To relate Gag processing in vitro with viral fitness, viral protease (PR) enzymatic activity and in vitro Gag processing were evaluated. The rate of sub-subtype A2 Gag polyprotein processing, measured as production of the p24 protein, was reduced compared to subtype B or C independent of PR subtype, indicating that sub-subtype A2 Gag qualitatively differs from the other subtypes. Introduction of the subtype B matrix-capsid cleavage site into sub-subtype A2 Gag only partially restored the processing rate. The unique amino acid polymorphism V124S at the matrix-capsid cleavage site, together with other polymorphisms at non-cleavage sites, differentially influences the processing of Gag polyproteins. This landscape of genetic polymorphisms defining HIV-1 sub-subtypes, subtypes, and recombinant forms is a determinant of viral fitness and frequency in the HIV-1-infected population.

Highlights
- The polymorphism at the matrix-capsid cleavage site, together with non-cleavage-site polymorphisms, directs the processing rate of the substrate, not the intrinsic activity of the enzyme.
- The less prevalent and less infectious sub-subtype A2 harbors the matrix-capsid cleavage-site polymorphism that we report as a limiting factor for Gag processing.
- The sub-subtype A2 Gag polyprotein processing rate is independent of the PR subtype.


2020, Vol 7 (4), pp. 496-512
Author(s): Hidenori Shimada, Shunichi Kato, Takumi Watanabe, Masaki Yamaguchi

Abstract
Hierarchical structures are promising geometries for superhydrophobic surfaces; however, a processing method with a single laser light source that is capable of both one-pass and rapid processing has not been established. The purpose of this study was to propose a concept for direct laser processing of two-scale periodic structures exhibiting superhydrophobicity. We hypothesized that the molten material that is produced by the expanding plasma and squeezed around the micro-holes could play an active role in the processing of two-scale periodic structures. Percussion drilling using a nanosecond pulsed laser (532 nm wavelength) was performed on a steel surface. Twenty-four different test pieces were prepared using pitch (16–120 μm), number of repetition shots (1–120), and fluence (2.49–20 J/cm²) as the parameters. As a result, micro-holes with bank-shaped outer rims were formed. The maximum apparent contact angle was 161.4° and the contact angle hysteresis was 4.2° for a pitch of 80 μm and 20 repetition shots. The calculated apparent contact angles were consistent with the measured results. Finally, an equation for estimating the processing rate was proposed. We demonstrated that this direct processing method can achieve a maximum processing rate of 823 mm²/min.
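The abstract does not reproduce the proposed processing-rate equation; a hypothetical throughput estimate in its spirit (one micro-hole per pitch × pitch cell, each drilled with a fixed number of pulses; the function name and the pulse-repetition-rate parameter are our assumptions, not the paper's model) could look like:

```python
def processing_rate_mm2_per_min(pitch_um, shots_per_hole, rep_rate_hz):
    # Hypothetical estimate, not the paper's exact equation: each hole
    # covers a pitch x pitch cell and consumes `shots_per_hole` pulses,
    # so area throughput = cell area * (pulses/s) / (pulses per hole).
    pitch_mm = pitch_um / 1000.0
    return pitch_mm ** 2 * rep_rate_hz / shots_per_hole * 60.0

# e.g. an 80 um pitch, 20 shots per hole, and an assumed 50 kHz repetition rate:
rate = processing_rate_mm2_per_min(80, 20, 50_000)  # -> 960.0 mm^2/min
```

Such a formula makes the trade-off in the abstract explicit: larger pitch and fewer repetition shots both raise the areal processing rate.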


Electronics, 2020, Vol 9 (11), pp. 1765
Author(s): Elim Yi Lam Kwan, Jose Nunez-Yanez

Binarized neural networks are well suited to FPGA accelerators since their fine-grained architecture allows the creation of custom operators to support low-precision arithmetic operations, and the reduction in memory requirements means that all the network parameters can be stored in internal memory. Although good progress has been made in improving the accuracy of binarized networks, it can be significantly lower than that of networks where weights and activations have multi-bit precision. In this paper, we address this issue by adaptively choosing the number of frames used during inference, exploiting the high frame rates that binarized neural networks can achieve. We present a novel entropy-based adaptive filtering technique that improves accuracy by varying the system’s processing rate based on the entropy present in the neural network output. We focus on real data captured with a standard camera rather than standard datasets that do not realistically represent the artifacts in video stream content. The overall design was prototyped on the Avnet Zedboard, where it achieved 70.4% accuracy with a full processing pipeline from video capture to final classification output, 1.9 times better than the base static-frame-rate system. The main feature of the system is that while the classification rate averages a constant 30 fps, the real processing rate is dynamic and varies between 30 and 142 fps, adapting to the complexity of the data. The dynamic processing rate results in better efficiency than simply working at the full frame rate, while delivering high accuracy.
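A minimal sketch of the entropy-based policy described above: the threshold, the frame-rate bounds (taken from the 30 to 142 fps range in the abstract), and the function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def output_entropy_bits(logits):
    # Shannon entropy (bits) of the classifier's softmax output: a flat
    # distribution (uncertain network) has high entropy, a peaked one low.
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return float(-(p * np.log2(p + 1e-12)).sum())

def adaptive_fps(logits, low_fps=30, high_fps=142, threshold_bits=1.0):
    # Process frames at the high rate only while the output is uncertain;
    # drop back to the base classification rate once the net is confident.
    return high_fps if output_entropy_bits(logits) > threshold_bits else low_fps

confident = adaptive_fps(np.array([10.0, 0.0, 0.0]))  # peaked -> base rate
uncertain = adaptive_fps(np.array([0.0, 0.0, 0.0]))   # flat -> full rate
```

The design choice mirrors the abstract: extra inference passes are spent only where the data is ambiguous, which is how the average classification rate can stay at 30 fps while the instantaneous processing rate varies.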


2020, Vol 2020, pp. 1-12
Author(s): Wenbao Hou, Guojun Tan, Delu Li

The model predictive control (MPC) method has been widely used to reduce the computational complexity of traditional space vector pulse width modulation (SVPWM). However, for a neutral-point-clamped three-level Z-source converter, the performance of a conventional MPC strategy depends heavily on the computation processing rate because of the multiple optimization calculations required in each cycle. In this paper, an improved MPC strategy is developed in which a voltage prediction replaces the current prediction, so the rolling-optimization calculation is simplified significantly and the digital execution efficiency is improved. Moreover, to obtain a fixed output harmonic frequency, a combination of this improved MPC with SVPWM is studied, and the shoot-through state insertion for the Z-source is analyzed in detail. Finally, comparison experiments verify the improved modulation mechanism.
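As a rough illustration of why a voltage prediction cheapens the rolling optimization, here is a generic one-step finite-control-set MPC sketch; it is not the authors' algorithm, and the three voltage levels merely stand in for one NPC leg's candidate states:

```python
import numpy as np

def fcs_mpc_select(v_ref, v_dc=1.0):
    # One-step finite-control-set MPC over a three-level leg: with a
    # voltage prediction, each candidate's cost is simply its distance
    # to the reference voltage, so no per-candidate current-model
    # rollout is needed inside the optimization loop.
    levels = np.array([-v_dc / 2, 0.0, v_dc / 2])  # NPC leg output levels
    costs = np.abs(v_ref - levels)
    return levels[int(np.argmin(costs))]

chosen = fcs_mpc_select(0.4)  # closest level to the 0.4 p.u. reference
```

Evaluating a scalar distance per candidate, instead of simulating a current trajectory for each one, is the kind of simplification that lets the strategy tolerate a modest digital processing rate.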

