Babel

2016 ◽  
Vol 13 (4) ◽  
pp. 36-53 ◽  
Author(s):  
Moisés Quezada-Naquid ◽  
Ricardo Marcelín-Jiménez ◽  
José Luis González-Compeán

The Babel File System is a dependable, scalable and flexible storage system. Among its main features, the authors highlight the availability of different types of data redundancy, a careful decoupling of data and metadata, a middleware that enforces metadata consistency, and its own load-balancing and allocation procedure, which adapts to the number and capacities of the underlying storage devices. It can be deployed over different hardware platforms, including commodity hardware. The authors' proposal is designed to let developers strike a trade-off between price and performance, depending on their particular applications.
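The capacity-aware allocation mentioned above can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical example that places blocks on devices with probability proportional to device capacity, using a deterministic hash so lookups are repeatable; the `place_block` helper, device names and weights are illustrative assumptions, not Babel's actual procedure.

```python
import hashlib

# Capacity-weighted placement sketch (illustrative only, not Babel's algorithm):
# a block lands on a device with probability proportional to its capacity.
def place_block(block_id, devices):
    # devices: list of (name, capacity) pairs; capacities act as relative weights
    total = sum(cap for _, cap in devices)
    h = int(hashlib.sha256(block_id.encode()).hexdigest(), 16) % total
    for name, cap in devices:
        if h < cap:
            return name
        h -= cap

devices = [("ssd-0", 400), ("hdd-0", 1000), ("hdd-1", 600)]
for blk in ("obj-17/chunk-0", "obj-17/chunk-1", "obj-42/chunk-0"):
    print(blk, "->", place_block(blk, devices))
```

Because the mapping depends only on the block identifier and the device table, any node that knows the table can recompute a block's location without consulting a central directory.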

2018 ◽  
Vol 10 (8) ◽  
pp. 73
Author(s):  
Jianjun Lei ◽  
Jiarui Tao ◽  
Shanshan Yang

To address access point (AP) overload and the performance anomaly caused by mobile terminals with different bitrates, this paper presents a joint AP association and bandwidth allocation optimization algorithm. Load balancing and proportional fairness are analyzed and formulated as an optimization model. We then present a Fair Bandwidth Allocation algorithm based on clients' Business Priority (FBA-BP), which allocates bandwidth according to each client's bandwidth demand and business priority. Furthermore, we propose a Categorized AP Association algorithm based on clients' demands (CAA-BD), which classifies APs by the types of clients they serve and chooses the optimal AP for a new client according to the AP categories and the aggregated demand transmission time computed by the FBA-BP algorithm. CAA-BD achieves load balance and resolves the performance anomaly caused by coexisting multi-rate clients. Simulation results show that the proposed algorithm achieves significant gains in AP utilization, throughput, transmission delay and channel fairness at different client densities, compared with the categorized and Strong Signal First (SSF) algorithms.
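As a rough illustration of demand- and priority-aware bandwidth sharing, the sketch below divides an AP's capacity in proportion to each client's business priority, caps each share at the client's demand, and redistributes leftover capacity. The function name, client names and numbers are hypothetical; the paper's actual FBA-BP formulation is more involved.

```python
# Priority-weighted, demand-capped bandwidth sharing (illustrative sketch,
# not the paper's exact FBA-BP algorithm).
def allocate_bandwidth(capacity, clients):
    # clients: dict name -> (demand, priority)
    alloc = {name: 0.0 for name in clients}
    active = set(clients)
    remaining = capacity
    while active and remaining > 1e-9:
        total_w = sum(clients[n][1] for n in active)
        satisfied, spent = set(), 0.0
        for n in active:
            demand, prio = clients[n]
            share = remaining * prio / total_w       # priority-proportional share
            give = min(share, demand - alloc[n])     # never exceed the demand
            alloc[n] += give
            spent += give
            if alloc[n] >= demand - 1e-9:
                satisfied.add(n)
        remaining -= spent
        if not satisfied:                            # everyone is demand-limited
            break
        active -= satisfied                          # redistribute leftovers
    return alloc

clients = {"voice-1": (2.0, 3), "video-1": (8.0, 2), "web-1": (20.0, 1)}
print(allocate_bandwidth(24.0, clients))
# -> {'voice-1': 2.0, 'video-1': 8.0, 'web-1': 14.0}  (capacity in Mbps, for example)
```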


2010 ◽  
Vol 96 (3) ◽  
pp. 8-15 ◽  
Author(s):  
Elizabeth S. Grace ◽  
Elizabeth J. Korinek ◽  
Zung V. Tran

ABSTRACT This study compares key characteristics and performance of physicians referred to a clinical competence assessment and education program by state medical boards (boards) and hospitals. Physicians referred by boards (400) and by hospitals (102) completed a CPEP clinical competence assessment between July 2002 and June 2010. Key characteristics, self-reported specialty, and average performance rating for each group are reported and compared. Results show that, compared with hospital-referred physicians, board-referred physicians were more likely to be male (75.5% versus 88.3%), older (average age 54.1 versus 50.3 years), and less likely to be currently specialty board certified (80.4% versus 61.8%). On a scale of 1 (best) to 4 (worst), average performance was 2.62 for board referrals and 2.36 for hospital referrals. There were no significant differences between board and hospital referrals in the percentage of physicians who graduated from U.S. and Canadian medical schools. The most common specialties referred differed for boards and hospitals. Conclusion: Characteristics of physicians referred to a clinical competence program by boards and hospitals differ in important respects. The authors consider the potential reasons for these differences and whether boards and hospitals are dealing with different subsets of physicians with different types of performance problems. Further study is warranted.


2020 ◽  
Vol 14 ◽  
Author(s):  
Khoirom Motilal Singh ◽  
Laiphrakpam Dolendro Singh ◽  
Themrichon Tuithung

Background: Data in the form of text, audio, image and video are used everywhere in our modern scientific world. These data are stored in physical storage, cloud storage and other storage devices. Some of these data are very sensitive and require efficient security both in storage and in transmission from the sender to the receiver. Objective: With the increase in data transfer operations, sufficient space is also required to store these data. Many researchers have developed different encryption schemes, yet their works still have limitations. There is always a need for encryption schemes with smaller cipher data, faster execution time and low computation cost. Methods: A text encryption scheme based on Huffman coding and the ElGamal cryptosystem is proposed. Initially, the text data are converted to their corresponding binary bits using Huffman coding. Next, the binary bits are grouped and converted into large integer values, which are used as the input to the ElGamal cryptosystem. Results: Encryption and decryption are performed successfully: the data size is reduced by Huffman coding, and the ElGamal cryptosystem provides enhanced security with a smaller key size. Conclusion: Simulation results and performance analysis indicate that our encryption algorithm outperforms the existing algorithms under consideration.
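A minimal end-to-end sketch of the pipeline described in Methods is given below. It assumes toy ElGamal parameters (a Mersenne prime modulus and base 3, which are not secure in practice), a simple heap-based Huffman coder, and a 120-bit blocking scheme with a sentinel bit; these helper names and choices are illustrative, not the authors' exact construction.

```python
import heapq
import random
from collections import Counter

# --- Huffman coding (simple heap-based sketch) ---
def build_codes(text):
    freq = Counter(text)
    # heap entries: (frequency, tiebreaker, node); a node is a char or a (left, right) pair
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                              # degenerate one-symbol text
        return {heap[0][2]: "0"}
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

def huffman_encode(text, codes):
    return "".join(codes[ch] for ch in text)

def huffman_decode(bits, codes):
    rev = {v: k for k, v in codes.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in rev:
            out.append(rev[cur])
            cur = ""
    return "".join(out)

# --- Grouping bits into large integers (a leading '1' sentinel preserves leading zeros) ---
def bits_to_blocks(bits, block_bits=120):
    return [int("1" + bits[i:i + block_bits], 2) for i in range(0, len(bits), block_bits)]

def blocks_to_bits(blocks):
    return "".join(bin(n)[3:] for n in blocks)      # bin() gives '0b1...'; drop prefix and sentinel

# --- Toy ElGamal (demo parameters only; NOT cryptographically secure) ---
P = 2**127 - 1          # a Mersenne prime, large enough for 121-bit message blocks
G = 3                   # demo base

def keygen():
    x = random.randrange(2, P - 1)                  # private key
    return x, pow(G, x, P)                          # (private key, public key)

def encrypt(m, h):
    k = random.randrange(2, P - 1)                  # fresh ephemeral key per block
    return pow(G, k, P), (m * pow(h, k, P)) % P

def decrypt(c1, c2, x):
    return (c2 * pow(c1, P - 1 - x, P)) % P         # multiply by the inverse of c1^x

# --- End-to-end round trip ---
text = "huffman coding followed by elgamal encryption"
codes = build_codes(text)
bits = huffman_encode(text, codes)
x, h = keygen()
cipher = [encrypt(m, h) for m in bits_to_blocks(bits)]
recovered = [decrypt(c1, c2, x) for c1, c2 in cipher]
assert huffman_decode(blocks_to_bits(recovered), codes) == text
```

The compression step shortens the bit string before it is split into blocks, so fewer ElGamal exponentiations are needed and the resulting ciphertext is smaller than encrypting the raw character codes directly.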


Author(s):  
Steven Bernstein

This commentary discusses three challenges for the promising and ambitious research agenda outlined in the volume. First, it interrogates the volume’s attempts to differentiate political communities of legitimation, which may vary widely in composition, power, and relevance across institutions and geographies, with important implications not only for who matters, but also for what gets legitimated, and with what consequences. Second, it examines avenues to overcome possible trade-offs from gains in empirical tractability achieved through the volume’s focus on actor beliefs and strategies. One such trade-off is less attention to evolving norms and cultural factors that may underpin actors’ expectations about what legitimacy requires. Third, it addresses the challenge of theory building that can link legitimacy sources, (de)legitimation practices, audiences, and consequences of legitimacy across different types of institutions.


Author(s):  
Mohammad Rizk Assaf ◽  
Abdel-Nasser Assimi

In this article, the authors investigate an enhanced two-stage MMSE (TS-MMSE) equalizer for bit-interleaved coded FBMC/OQAM systems, which offers a tradeoff between complexity and performance: because the error-correcting code limits error propagation, the equalizer can remove not only ICI but also ISI in the second stage. The proposed equalizer exhibits lower design complexity than other MMSE equalizers. The obtained results show that the error probability is improved, with an SNR gain of up to 2 dB at a given BER compared with ICI cancellation, for different modulation schemes over the ITU Vehicular B channel model. Simulation results are provided to illustrate the effectiveness of the proposed equalizer.
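As background for readers unfamiliar with MMSE equalization, the snippet below shows the classical one-tap per-subcarrier MMSE coefficient w = conj(H) / (|H|^2 + 1/SNR). It illustrates only the generic first-stage principle under assumed variable names and a toy flat-fading channel; it is not the authors' TS-MMSE design or its ISI/ICI cancellation stage.

```python
import numpy as np

# One-tap MMSE equalization per subcarrier (generic principle, not TS-MMSE).
def mmse_equalize(received, channel, snr_linear):
    w = np.conj(channel) / (np.abs(channel) ** 2 + 1.0 / snr_linear)
    return w * received

rng = np.random.default_rng(1)
n = 64                                                        # assumed number of subcarriers
symbols = (2 * rng.integers(0, 2, n) - 1) + 1j * (2 * rng.integers(0, 2, n) - 1)   # QPSK
channel = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)      # per-subcarrier fading
snr_linear = 10 ** (20 / 10)                                  # 20 dB
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt(0.5 / snr_linear)
received = channel * symbols + noise
equalized = mmse_equalize(received, channel, snr_linear)
print("mean squared error after equalization:", np.mean(np.abs(equalized - symbols) ** 2))
```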


2021 ◽  
Vol 18 (2) ◽  
pp. 1-24
Author(s):  
Nhut-Minh Ho ◽  
Himeshi De silva ◽  
Weng-Fai Wong

This article presents GRAM (GPU-based Runtime Adaption for Mixed-precision), a framework for the effective use of mixed-precision arithmetic in CUDA programs. Our method provides a fine-grained tradeoff between output error and performance. It can create many variants that satisfy different accuracy requirements by adaptively assigning different groups of threads to different precision levels at runtime. To widen the range of applications that can benefit from its approximation, GRAM comes with an optional half-precision approximate math library. Using GRAM, precision can be traded off for performance improvements of up to 540%, depending on the application and the accuracy requirement.
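The precision/accuracy tradeoff that GRAM exploits can be illustrated in a few lines. The sketch below (plain NumPy on the CPU, not GRAM's CUDA runtime; the `mixed_precision_dot` function and its fraction parameter are hypothetical) computes part of a dot product in float16 and the rest in float32, then reports the error against a full-precision baseline.

```python
import numpy as np

# Illustration of a mixed-precision tradeoff (not GRAM's actual mechanism):
# evaluate a fraction of the work in float16 and the rest in float32.
def mixed_precision_dot(a, b, half_fraction):
    n = int(len(a) * half_fraction)
    low = np.dot(a[:n].astype(np.float16), b[:n].astype(np.float16))
    high = np.dot(a[n:].astype(np.float32), b[n:].astype(np.float32))
    return float(low) + float(high)

rng = np.random.default_rng(0)
a, b = rng.standard_normal(10_000), rng.standard_normal(10_000)
exact = float(np.dot(a, b))
for frac in (0.0, 0.5, 1.0):
    approx = mixed_precision_dot(a, b, frac)
    print(f"float16 fraction {frac:.1f}: relative error {abs(approx - exact) / abs(exact):.2e}")
```

Raising the float16 fraction stands in for assigning more threads to the lower precision level: the error grows while (on real GPU hardware) the arithmetic gets cheaper, which is exactly the knob a runtime like GRAM tunes per accuracy requirement.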


2021 ◽  
Vol 34 (5) ◽  
pp. 303-318
Author(s):  
Maarten Baele ◽  
An Vermeulen ◽  
Dimitri Adons ◽  
Roos Peeters ◽  
Angelique Vandemoortele ◽  
...  

2021 ◽  
Vol 10 (1) ◽  
pp. 20
Author(s):  
Walter Tiberti ◽  
Dajana Cassioli ◽  
Antinisca Di Marco ◽  
Luigi Pomante ◽  
Marco Santic

Advances in technology call for a parallel evolution in software. New techniques are needed to support this dynamism and to track and guide the evolution process. This applies especially in the field of embedded systems, and certainly in Wireless Sensor Networks (WSNs), where hardware platforms and software environments change very quickly. Operating systems commonly play a key role in the development process of any application. The most widely used operating system in WSNs is TinyOS, currently at version 2.1.2. The evolution from TinyOS 1.x to TinyOS 2.x made applications developed on TinyOS 1.x obsolete: they are not compatible out-of-the-box with TinyOS 2.x and require a porting effort. In this paper, we discuss the porting of embedded-system (i.e., Wireless Sensor Network) applications in response to operating system evolution. In particular, using a model-based approach, we report our porting of Agilla, a Mobile-Agent Middleware (MAMW) for WSNs, to TinyOS 2.x, which we refer to as Agilla 2. We also provide a comparative analysis of the characteristics of Agilla 2 versus Agilla. The proposed Agilla 2 is compatible with TinyOS 2.x, retains full capabilities, and provides new features, as shown by the maintainability and performance measurements presented in this paper. An additional valuable result is the architectural modeling of Agilla and Agilla 2, which was missing before and which extends the documentation and improves maintainability.

