Area-Throughput Trade-Offs for SHA-1 and SHA-256 Hash Functions’ Pipelined Designs

2016 · Vol 25 (04) · pp. 1650032
Author(s): Harris E. Michail, George S. Athanasiou, Vasileios I. Kelefouras, George Theodoridis, Thanos Stouraitis, et al.

High-throughput designs of hash functions are in strong demand due to the need to secure every transmitted packet in worldwide e-transactions. Thus, both optimized and non-optimized pipelined architectures have been proposed, raising, however, important questions. What is the optimum number of pipeline stages? Is it worthwhile to develop optimized designs, or could the same results be achieved simply by increasing the number of pipeline stages in non-optimized designs? This paper answers these questions by extensively studying many pipelined architectures of the SHA-1 and SHA-256 hash functions, implemented on FPGAs, in terms of the throughput/area (T/A) factor. Guidelines for developing efficient security-scheme designs are also provided.
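As a rough illustration of the T/A question the paper studies, the Python sketch below compares hypothetical pipelined SHA-256 cores; the stage counts, clock frequencies, and area figures are placeholder assumptions, not results from the paper.

```python
# Back-of-the-envelope throughput/area (T/A) comparison for pipelined
# SHA-256 cores. With N pipeline stages over the 64 compression rounds,
# a new 512-bit block can enter every 64/N cycles, so:
#   throughput = block_bits * f_clk * N / 64
# The (stages, f_clk in MHz, area in slices) tuples are placeholders.

BLOCK_BITS = 512   # SHA-256 message block size
ROUNDS = 64        # SHA-256 compression rounds (SHA-1 uses 80)

designs = [(1, 100, 1000), (2, 180, 1700), (4, 300, 4000), (8, 380, 12000)]

for stages, f_mhz, area in designs:
    throughput_mbps = BLOCK_BITS * f_mhz * stages / ROUNDS   # Mbit/s
    print(f"{stages} stage(s): T = {throughput_mbps:8.1f} Mbps, "
          f"T/A = {throughput_mbps / area:.3f} Mbps/slice")
```

With these placeholder numbers, raw throughput grows monotonically with depth, but T/A peaks at an intermediate stage count (here, four stages), which is exactly the kind of optimum the paper seeks.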

2021
Author(s): Noel S. Ha, Markus de Raad, La Zhen Han, Amber Golini, Christopher J. Petzold, et al.

High-throughput screening technologies are widely used for elucidating biological activities. These typically require trade-offs between assay specificity and sensitivity to achieve higher throughput. Microfluidic approaches enable rapid manipulation of small...


Electronics · 2021 · Vol 10 (13) · pp. 1599
Author(s): Alexander Marinšek, Daan Delabie, Lieven De Strycker, Liesbet Van der Perre

Emerging applications in fields such as extended reality require both high throughput and low latency. The millimeter-wave (mmWave) spectrum is considered because of its large available bandwidth. The present work studies mmWave Wi-Fi physical layer latency management mechanisms, a key factor in providing low-latency communications for time-critical applications. We calculate physical layer latency in an ideal scenario and simulate it using a tailor-made simulation framework based on the IEEE 802.11ad standard. Assessing data reception quality over a noisy channel revealed latency’s dependency on transmission parameters, channel noise, and digital baseband tuning. Latency as a function of the modulation and coding scheme was found to span 0.28–2.71 ms in the ideal scenario, whereas simulation results also revealed its tight bond with the demapping algorithm and the number of low-density parity-check decoder iterations. The findings yielded tuning-parameter combinations for reaching Pareto optimality, either by constraining the bit error rate and optimizing latency or the other way around. Our assessment shows that trade-offs can, and have to, be made to provide sufficiently reliable low-latency communication. In good channel conditions, one may benefit from both very high throughput and low latency; yet, in more adverse situations, lower modulation orders and additional coding overhead are a necessity.
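To make the ideal-scenario calculation concrete, here is a minimal Python sketch of first-order PHY latency as a function of the data rate; the overhead duration, payload size, and rate values are illustrative placeholders rather than the exact IEEE 802.11ad MCS table or the paper's configuration.

```python
# First-order PHY latency for a single transmission:
#   latency ≈ t_overhead + payload_bits / phy_rate
# All values below are illustrative placeholders.

T_OVERHEAD_US = 2.0        # assumed preamble + header duration (us)
PAYLOAD_BYTES = 100_000    # hypothetical aggregate payload

rates_mbps = {"low-order MCS": 385, "mid-order MCS": 1540, "high-order MCS": 4620}

for name, rate in rates_mbps.items():
    latency_ms = T_OVERHEAD_US / 1e3 + (PAYLOAD_BYTES * 8) / (rate * 1e6) * 1e3
    print(f"{name:14s} {rate:5d} Mb/s -> ≈ {latency_ms:5.2f} ms")
```

Even this toy calculation spreads latency by roughly an order of magnitude across data rates, mirroring the paper's observation that the modulation and coding scheme alone dominates ideal-scenario latency, before channel noise and decoder iterations enter the picture.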


Symmetry · 2021 · Vol 13 (8) · pp. 1363
Author(s): Damilare Peter Oyinloye, Je Sen Teh, Norziana Jamil, Moatsum Alawida

Blockchain networks are based on cryptographic notions that include asymmetric-key encryption, hash functions, and consensus protocols. Despite their popularity, mainstream protocols such as Proof of Work and Proof of Stake still have drawbacks. Efforts to enhance these protocols led to the birth of alternative consensus protocols catering to specific areas, such as medicine or transportation. These protocols remain relatively unknown despite having unique merits worth investigating. Although past reviews have been published on popular blockchain consensus protocols, they do not include most of these lesser-known protocols. Highlighting these alternative consensus protocols contributes toward the advancement of the state of the art, as they have design features that may be useful to academics, blockchain practitioners, and researchers. In this paper, we bridge this gap by providing an overview of alternative consensus protocols proposed within the past three years. We evaluate their overall performance based on metrics such as throughput, scalability, security, energy consumption, and finality. In our review, we examine the trade-offs that these consensus protocols have made in their attempts to optimize scalability and performance. To the best of our knowledge, this is the first paper that focuses on these alternative protocols, highlighting their unique features that can be used to develop future consensus protocols.
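As a hedged illustration of two of the review's comparison metrics, the Python sketch below computes throughput and time-to-finality from block parameters; all numbers are placeholder assumptions, not measurements of any named protocol.

```python
# Two common consensus comparison metrics:
#   throughput (tx/s)  = (block_size / avg_tx_size) / block_interval
#   time-to-finality   = confirmations_needed * block_interval
# Parameter values are illustrative placeholders only.

protocols = {
    # name: (block_size_B, avg_tx_B, block_interval_s, confirmations)
    "PoW-like":  (1_000_000, 500, 600.0, 6),
    "PoS-like":  (1_000_000, 500, 12.0, 2),
    "BFT-style": (2_000_000, 500, 1.0, 1),
}

for name, (blk, tx, interval, confs) in protocols.items():
    tps = (blk / tx) / interval
    print(f"{name:9s} throughput ≈ {tps:8.1f} tx/s, "
          f"finality ≈ {confs * interval:6.1f} s")
```

Even this toy model shows the kind of trade-off the review examines: shortening the block interval or finality window typically costs something elsewhere, such as communication overhead or a smaller validator set.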


mBio · 2014 · Vol 5 (3)
Author(s): Jason T. Ladner, Brett Beitzel, Patrick S. G. Chain, Matthew G. Davenport, Eric Donaldson, et al.

Thanks to high-throughput sequencing technologies, genome sequencing has become a common component in nearly all aspects of viral research; thus, we are experiencing an explosion in both the number of available genome sequences and the number of institutions producing such data. However, there are currently no common standards used to convey the quality, and therefore utility, of these various genome sequences. Here, we propose five “standard” categories that encompass all stages of viral genome finishing, and we define them using simple criteria that are agnostic to the technology used for sequencing. We also provide genome finishing recommendations for various downstream applications, keeping in mind the cost-benefit trade-offs associated with different levels of finishing. Our goal is to define a common vocabulary that will allow comparison of genome quality across different research groups, sequencing platforms, and assembly techniques.
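A minimal Python sketch of how such technology-agnostic categories could be applied programmatically; the tier names follow the paper's five proposed standards, but the decision criteria below are simplified assumptions for illustration, not the paper's exact definitions.

```python
# Hedged sketch: assigning an assembly to one of five finishing tiers.
# The numeric/boolean criteria are simplified illustrations only.

def finishing_category(n_gaps: int, coding_complete: bool,
                       utrs_resolved: bool, validated: bool) -> str:
    """Return a viral genome finishing tier from coarse assembly facts."""
    if validated and utrs_resolved and coding_complete and n_gaps == 0:
        return "Finished"
    if utrs_resolved and coding_complete and n_gaps == 0:
        return "Complete"
    if coding_complete:
        return "Coding Complete"
    if n_gaps <= 2:
        return "High-Quality Draft"
    return "Standard Draft"

print(finishing_category(n_gaps=0, coding_complete=True,
                         utrs_resolved=False, validated=False))
# -> Coding Complete
```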


TECHNOLOGY · 2018 · Vol 06 (01) · pp. 1-23
Author(s): Anil B. Shrirao, Zachary Fritz, Eric M. Novik, Gabriel M. Yarmush, Rene S. Schloss, et al.

Flow cytometry is an invaluable tool in modern biomedical research and clinical applications that require high-throughput, high-resolution particle analysis for cytometric characterization and/or sorting of cells and particles, as well as for analyzing results from immunocytometric assays. In recent years, research has focused on developing microfluidic flow cytometers with the motivation of creating smaller, less expensive, simpler, and more autonomous alternatives to conventional flow cytometers. These devices could ideally be highly portable, easy to operate without extensive user training, and used for research purposes and/or point-of-care diagnostics, especially in limited-resource facilities or locations requiring on-site analyses. However, designing a device that fulfills the criteria of high-throughput analysis, automation, and portability without sacrificing performance is not a trivial matter. This review presents the current state of the field and provides considerations for further improvement by focusing on the key design components of microfluidic flow cytometers. Recent innovations in particle focusing and detection strategies are detailed and compared. The review outlines performance metrics of flow cytometers that are interdependent with one another, suggesting trade-offs in selection based on the requirements of the application. The ongoing contribution of microfluidics demonstrates that it is a viable technology for advancing the current state of flow cytometry and for developing automated, easy-to-operate, and cost-effective flow cytometers.
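One concrete example of the interdependent performance parameters the review discusses is the throughput-versus-coincidence trade-off; the Python sketch below uses the standard Poisson approximation, with an assumed interrogation time as a placeholder.

```python
import math

# Throughput vs. coincidence in flow cytometry: with Poisson particle
# arrivals at rate lam (events/s) and an interrogation/dead time tau (s),
# the chance another particle overlaps a given event is 1 - exp(-lam*tau).
# tau below is an assumed placeholder value.

tau = 10e-6  # assumed 10 us interrogation window

for rate in (1_000, 10_000, 50_000, 100_000):  # events per second
    p_coinc = 1 - math.exp(-rate * tau)
    print(f"{rate:7d} ev/s -> coincidence probability ≈ {p_coinc:.3f}")
```

Under these assumptions, raising throughput from 1,000 to 100,000 events/s takes the coincidence probability from about 1% to about 63%, which is why focusing and detection design cannot be decoupled from the target event rate.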


1978 · Vol 100 (4) · pp. 708-712
Author(s): A. Bejan

The paper presents a treatment of sensible heat energy storage units as systems intended to store useful work. An analysis of the thermodynamic irreversibilities associated with storing energy from a hot gas source as sensible heat in huge liquid baths points out two important trade-offs: (1) There exists an optimum, well-defined quantity of hot gas to be used in order to maximize the useful work stored in the liquid bath; using more than this optimum quantity in the hope of maximizing the amount of thermal energy stored as sensible heat leads to severe thermodynamic losses. (2) There exists an optimum relationship among the gas-liquid heat exchanger design parameters which minimizes the system irreversibility while maximizing its capability of storing useful work; this relationship provides a procedure for estimating the optimum number of heat exchanger transfer units (Ntu). Increasing the Ntu above the optimum in order to improve the heat exchanger effectiveness and the thermal energy storage capability leads to prohibitive losses due to fluid friction in the heat exchanger channels. The existence of the two optima demonstrates that designing sensible heat units for maximum thermal energy storage does not necessarily amount to thermodynamically optimizing such systems.
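A small numeric Python sketch of the second trade-off (the optimum Ntu) under a deliberately simplified loss model; the weights a and b are arbitrary placeholders, not quantities derived in the paper.

```python
import math

# Simplified picture of the Ntu trade-off: effectiveness eps = 1 - exp(-Ntu)
# reduces heat-transfer irreversibility, while friction irreversibility is
# taken to grow linearly with Ntu (longer channels). Weights a, b are
# arbitrary placeholders.

a, b = 1.0, 0.05  # assumed relative weights of the two loss terms

best_ntu, best_loss = 0.0, float("inf")
for i in range(1, 401):
    ntu = 0.05 * i                       # sweep Ntu over (0, 20]
    loss = a * math.exp(-ntu) + b * ntu  # heat-transfer + friction losses
    if loss < best_loss:
        best_ntu, best_loss = ntu, loss

print(f"optimum Ntu ≈ {best_ntu:.2f}")   # analytically ln(a/b) ≈ 3.00
```

Pushing Ntu past this point buys effectiveness but loses more to friction than it gains, mirroring the paper's conclusion that maximum thermal storage is not the thermodynamic optimum.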


2020
Author(s): Samuel Katz, Jian Song, Kyle P. Webb, Nicolas W. Lounsbury, Clare E. Bryant, et al.

Comprehensive and efficient gene hit selection from high-throughput assays remains a critical bottleneck in realizing the potential of genome-scale studies in biology. Widely used methods such as setting cutoffs, prioritizing pathway enrichments, or incorporating predicted network interactions offer divergent solutions, yet are associated with critical analytical trade-offs and are often combined in an ad hoc manner. The specific limitations of these individual approaches, the lack of a systematic way to integrate their rankings, and the inaccessibility of complex computational approaches to many researchers have contributed to unexpected variability and limited overlap in the reported results from comparable genome-wide studies. Using a set of three highly studied genome-wide datasets for HIV host factors that have been broadly cited for their limited number of shared candidates, we characterize the specific complementary contributions of commonly used analysis approaches and find an optimal framework by which to integrate these methods. We describe Throughput Ranking by Iterative Analysis of Genomic Enrichment (TRIAGE), an integrated, iterative approach which uses pathway and network statistical methods and publicly available databases to optimize gene prioritization. TRIAGE is accessible as a secure, rapid, user-friendly web-based application (https://triage.niaid.nih.gov).
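As a hedged caricature (not the actual TRIAGE algorithm) of what an iterative, network-informed hit-refinement loop looks like, consider the following toy Python sketch; the scores, network, and cutoffs are all invented illustration data.

```python
# Toy iterative hit refinement: borderline genes are promoted when a
# pathway/network neighbor is already a confident hit, and the loop
# repeats until the hit set stops changing. All data are invented.

scores = {"A": 0.95, "B": 0.80, "C": 0.55, "D": 0.50, "E": 0.10}
network = {"A": {"C"}, "B": {"D"}, "C": {"A"}, "D": {"B"}, "E": set()}

HIGH, BORDERLINE = 0.7, 0.4  # placeholder cutoffs

hits = {g for g, s in scores.items() if s >= HIGH}
changed = True
while changed:
    changed = False
    for g, s in scores.items():
        if g not in hits and s >= BORDERLINE and network[g] & hits:
            hits.add(g)          # promote a supported borderline gene
            changed = True

print(sorted(hits))  # -> ['A', 'B', 'C', 'D']
```

The point of the caricature is the iteration itself: a single cutoff would keep only A and B, whereas network support rescues C and D, illustrating why integrating rankings beats any one criterion applied once.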

