Evaluate trade-offs between I_sp and lifetime for a specified fuel elements state-of-the-art

1971 ◽  
Author(s):  


2021 ◽  
Vol 14 (5) ◽  
pp. 785-798
Author(s):  
Daokun Hu ◽  
Zhiwen Chen ◽  
Jianbing Wu ◽  
Jianhua Sun ◽  
Hao Chen

Persistent memory (PM) is increasingly being leveraged to build hash-based indexing structures featuring cheap persistence, high performance, and instant recovery, especially with the recent release of Intel Optane DC Persistent Memory Modules. However, most of them are evaluated on DRAM-based emulators under unrealistic assumptions, or focus on specific metrics while sidestepping important properties. Thus, it is essential to understand how well the proposed hash indexes perform on real PM and how they differ from one another when a wider range of performance metrics is considered. To this end, this paper provides a comprehensive evaluation of persistent hash tables. In particular, we focus on the evaluation of six state-of-the-art hash tables, including Level hashing, CCEH, Dash, PCLHT, Clevel, and SOFT, on real PM hardware. Our evaluation was conducted using a unified benchmarking framework and representative workloads. Besides characterizing common performance properties, we also explore how hardware configurations (such as PM bandwidth, CPU instructions, and NUMA) affect the performance of PM-based hash tables. With our in-depth analysis, we identify design trade-offs and good paradigms in prior art, and suggest desirable optimizations and directions for the future development of PM-based hash tables.
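The authors' unified benchmarking framework is not reproduced here; the following is a minimal sketch of how such a harness could time a mixed insert/search workload against an interchangeable hash-table implementation and report throughput. All names (HashTableUnderTest, run_workload) are hypothetical placeholders, not the paper's code.

```python
import time
import random

class HashTableUnderTest:
    """Hypothetical stand-in for a PM-backed hash table (e.g., Level hashing, CCEH, Dash)."""
    def __init__(self):
        self._d = {}
    def insert(self, key, value):
        self._d[key] = value
    def search(self, key):
        return self._d.get(key)

def run_workload(table, n_ops=1_000_000, insert_ratio=0.5, key_space=1_000_000, seed=42):
    """Time a mixed insert/search workload and report throughput in ops/s."""
    rng = random.Random(seed)
    start = time.perf_counter()
    for _ in range(n_ops):
        key = rng.randrange(key_space)
        if rng.random() < insert_ratio:
            table.insert(key, key)
        else:
            table.search(key)
    elapsed = time.perf_counter() - start
    return n_ops / elapsed

if __name__ == "__main__":
    throughput = run_workload(HashTableUnderTest())
    print(f"throughput: {throughput:,.0f} ops/s")
```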


2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Samantha Caporal Del Barrio ◽  
Art Morris ◽  
Gert F. Pedersen

In today’s mobile device market, there is a strong need for efficient antenna miniaturization. Tunable antennas are a very promising way to reduce antenna volume while enlarging its operating bandwidth. MEMS tunable capacitors are state-of-the-art in terms of insertion loss. Their characteristics are used in this investigation. This paper uses field simulations to highlight the trade-offs between the design of the tuner and the design of the antenna, especially the impact of the location of the tuner and the degree of miniaturization. Codesigning the tuner and the antenna is essential to optimize radiated performance.
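As a rough illustration of the tuning principle (not taken from the paper), the lumped-element resonance formula f = 1/(2π√(LC)) shows how varying a MEMS capacitor shifts the operating frequency of an electrically small antenna; the component values below are arbitrary.

```python
import math

def resonant_frequency_hz(inductance_h, capacitance_f):
    """Lumped-element LC resonance: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values only: a fixed 5 nH antenna inductance with a MEMS
# capacitor tuned between 0.5 pF and 5 pF sweeps the resonance roughly
# from ~3.2 GHz down to ~1.0 GHz.
L = 5e-9
for C in (0.5e-12, 1e-12, 2e-12, 5e-12):
    print(f"C = {C * 1e12:.1f} pF -> f ≈ {resonant_frequency_hz(L, C) / 1e9:.2f} GHz")
```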


2020 ◽  
Vol 34 (04) ◽  
pp. 5700-5708 ◽  
Author(s):  
Jianghao Shen ◽  
Yue Wang ◽  
Pengfei Xu ◽  
Yonggan Fu ◽  
Zhangyang Wang ◽  
...  

While increasingly deep networks are still generally desired for achieving state-of-the-art performance, a simpler network may already suffice for many specific inputs. Existing works exploit this observation by learning to skip convolutional layers in an input-dependent manner. However, we argue that their binary decision scheme, i.e., either fully executing or completely bypassing a layer for a specific input, can be enhanced by introducing finer-grained, “softer” decisions. We therefore propose a Dynamic Fractional Skipping (DFS) framework. The core idea of DFS is to hypothesize layer-wise quantization (to different bitwidths) as intermediate “soft” choices between fully utilizing and skipping a layer. For each input, DFS dynamically assigns a bitwidth to both the weights and activations of each layer, where full execution and skipping can be viewed as the two “extremes” (i.e., full bitwidth and zero bitwidth). In this way, DFS can “fractionally” exploit a layer's expressive power during input-adaptive inference, enabling finer-grained accuracy-cost trade-offs and presenting a unified view that links input-adaptive layer skipping and input-adaptive hybrid quantization. Extensive experimental results demonstrate the superior trade-off between computational cost and model expressive power (accuracy) achieved by DFS. Visualizations further indicate a smooth and consistent transition in DFS behavior, especially in the learned choices between layer skipping and different quantizations as the total computational budget varies, validating our hypothesis that layer quantization can be viewed as an intermediate variant of layer skipping. Our source code and supplementary material are available at https://github.com/Torment123/DFS.
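The paper's gating network and training procedure are not reproduced here; the following is a minimal conceptual sketch, with hypothetical names (choose_bitwidth, quantize, fractional_layer), of treating skipping and full execution as the two extremes of an input-dependent, per-layer bitwidth choice.

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantization of an array to the given bitwidth."""
    if bits >= 32:
        return x                          # full precision: leave untouched
    scale = float(np.max(np.abs(x)))
    if scale == 0.0:
        return x
    levels = 2 ** (bits - 1) - 1
    return np.round(x / scale * levels) / levels * scale

def choose_bitwidth(layer_input, thresholds=(0.1, 0.5, 1.0)):
    """Hypothetical input-dependent gate: an 'easy' input (small activation
    magnitude) gets a low bitwidth or a skip; a 'hard' one gets full precision."""
    score = float(np.mean(np.abs(layer_input)))
    if score < thresholds[0]:
        return 0                          # skip the layer entirely
    if score < thresholds[1]:
        return 4
    if score < thresholds[2]:
        return 8
    return 32

def fractional_layer(x, weight):
    """One DFS-style layer: skip, execute at a reduced bitwidth, or run in full."""
    bits = choose_bitwidth(x)
    if bits == 0:
        return x                          # identity shortcut (layer skipped)
    w_q = quantize(weight, bits)
    x_q = quantize(x, bits)
    return np.maximum(x_q @ w_q, 0.0)     # quantized matmul + ReLU

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(1, 64)) * 0.3
    w = rng.normal(size=(64, 64)) * 0.1
    print(fractional_layer(x, w).shape)
```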


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jason W. Ostrowe

Purpose: The purpose of this state-of-the-art review is to explore the empirical literature on federal intervention of police under 42 USC Section 14141.
Design/methodology/approach: A five-stage scoping review of the empirical literature related to 14141 was conducted through searches of scholarly databases and gray literature.
Findings: This scoping review revealed 21 empirical studies of 14141 published between 2002 and 2020 in criminal justice, criminology, legal, and gray literature. Researchers employed various methodologies and designs to study 14141, reflecting the complexity of evaluating a multistage, multi-outcome federal intervention of police. The success of 14141 in reforming police agencies is mixed. The empirical evidence suggests that application of this law is fraught with trade-offs and uncertainties, including de-policing, increased crime, and organizational difficulties in sustaining reform. Overall, more research would assist in understanding the efficacy of this federal mechanism of police accountability and reform.
Originality/value: This review is the first synthesis of the empirical literature on 14141. In consideration of the current national police crisis, its findings help illuminate both what is known about federal intervention and areas for future research.


Geophysics ◽  
1999 ◽  
Vol 64 (2) ◽  
pp. 636-640
Author(s):  
Roy E. White

This paper by Ziolkowski, Underhill, and Johnson (abbreviated below to ZUJ) opens by speaking of problems in tying seismic data to wells that have not been addressed properly. Yet it fails to present any solid evidence of the problems it claims to address and shows no clear awareness of the state of the art. The paper’s arguments are sustained only by ignoring important distinctions, degrees of approximation, necessary trade‐offs in processing, and quantitative analysis.


2021 ◽  
Vol 39 (4) ◽  
pp. 1-29
Author(s):  
Shijun Li ◽  
Wenqiang Lei ◽  
Qingyun Wu ◽  
Xiangnan He ◽  
Peng Jiang ◽  
...  

Static recommendation methods like collaborative filtering suffer from an inherent limitation: they cannot perform real-time personalization for cold-start users. Online recommendation, e.g., the multi-armed bandit approach, addresses this limitation by interactively exploring user preferences online and pursuing the exploration-exploitation (EE) trade-off. However, existing bandit-based methods model recommendation actions homogeneously. Specifically, they consider only the items as the arms and are incapable of handling item attributes, which naturally provide interpretable information about a user's current demands and can effectively filter out undesired items. In this work, we consider conversational recommendation for cold-start users, where a system can both ask a user about attributes and recommend items interactively. This important scenario was studied in a recent work [54]. However, that work employs a hand-crafted function to decide when to ask about attributes or make recommendations. Such separate modeling of attributes and items makes the effectiveness of the system rely heavily on the choice of the hand-crafted function, thus introducing fragility into the system. To address this limitation, we seamlessly unify attributes and items in the same arm space and achieve their EE trade-offs automatically using the framework of Thompson Sampling. Our Conversational Thompson Sampling (ConTS) model holistically solves all questions in conversational recommendation by choosing the arm with the maximal reward to play. Extensive experiments on three benchmark datasets show that ConTS outperforms the state-of-the-art methods Conversational UCB (ConUCB) [54] and the Estimation-Action-Reflection model [27] in both success rate and average number of conversation turns.
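A minimal sketch of the unified-arm-space idea, assuming a simple Beta-Bernoulli reward model rather than the paper's actual contextual formulation: item arms ("rec") and attribute arms ("ask") share one posterior, and the arm with the largest sampled reward is played. All names are illustrative, not the ConTS implementation.

```python
import random

class UnifiedThompsonSampler:
    """Beta-Bernoulli Thompson Sampling over a single arm space mixing items
    and attributes, in the spirit of ConTS; an illustrative sketch only."""

    def __init__(self, items, attributes):
        self.arms = [("rec", i) for i in items] + [("ask", a) for a in attributes]
        self.alpha = {arm: 1.0 for arm in self.arms}   # prior successes
        self.beta = {arm: 1.0 for arm in self.arms}    # prior failures

    def select_arm(self):
        """Sample a reward for every arm and play the arm with the largest sample."""
        return max(self.arms, key=lambda a: random.betavariate(self.alpha[a], self.beta[a]))

    def update(self, arm, success):
        """Bayesian update from user feedback (accepted item / confirmed attribute)."""
        if success:
            self.alpha[arm] += 1.0
        else:
            self.beta[arm] += 1.0

if __name__ == "__main__":
    sampler = UnifiedThompsonSampler(items=["item_1", "item_2"], attributes=["genre", "price"])
    for _ in range(5):
        kind, name = sampler.select_arm()
        # In a real conversation, 'ask' arms query an attribute and 'rec' arms recommend an item.
        feedback = random.random() < 0.5
        sampler.update((kind, name), feedback)
        print(kind, name, feedback)
```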


1982 ◽  
Vol 36 (1) ◽  
pp. 43-55
Author(s):  
Patrick J. Hui

Four different signal processing techniques applicable to GPS geodetic equipment are considered in this paper: pseudorange measurements, integrated Doppler counts, carrier phase measurements, and interferometric measurements. Hardware requirements and error budgets are reviewed. The inherent performance limitations of each technique, and the design trade-offs involved in attempting to achieve its full performance potential using state-of-the-art electronics, are discussed. This provides a basis for a comparative analysis of these signal processing techniques as applied to GPS geodetic equipment.
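As a simple illustration of the first of these techniques (not drawn from the paper), a pseudorange is the apparent signal travel time scaled by the speed of light, which is why even small receiver clock biases dominate the error budget.

```python
C = 299_792_458.0  # speed of light, m/s

def pseudorange_m(t_transmit_s, t_receive_s, receiver_clock_bias_s=0.0):
    """Pseudorange = c * (apparent travel time), including the receiver clock bias."""
    return C * (t_receive_s - t_transmit_s + receiver_clock_bias_s)

# A satellite roughly 20,200 km away gives a travel time of about 67 ms; a 1 microsecond
# receiver clock error alone shifts the pseudorange by roughly 300 m.
print(pseudorange_m(0.0, 0.0674))        # ≈ 20,206,000 m
print(pseudorange_m(0.0, 0.0674, 1e-6))  # ≈ 300 m larger
```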


2020 ◽  
Vol 2 (1) ◽  
Author(s):  
Darius Sas ◽  
Paris Avgeriou ◽  
Ronald Kruizinga ◽  
Ruben Scheedler

The interplay between Maintainability and Reliability can be particularly complex, and different kinds of trade-offs may arise when developers try to optimise for either of these two qualities. To further understand how Maintainability and Reliability influence each other, we perform an empirical study using architectural smells and source code file co-changes as proxies for these two qualities, respectively. The study is designed as an exploratory multiple-case study following well-known guidelines and using fourteen open source Java projects. Three research questions are identified and investigated through statistical analysis. Co-changes are detected using both a state-of-the-art algorithm and a novel approach. The three architectural smells selected are among the most important in the literature and are detected using open source tools. The results show that 50% of co-changes eventually end up taking part in an architectural smell. Moreover, statistical tests indicate that in 50% of the projects, files and packages taking part in smells are more likely to co-change than non-smelly files. Finally, co-changes were also found to appear before smells in 90% of the cases where a smell and a co-change appear in the same file pair. Our findings show that Reliability is indirectly affected by low levels of Maintainability, even at the architectural level, because low-quality components require more frequent changes by developers, increasing the chances of eventually introducing faults.
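The authors' detection algorithms are not reproduced here; the sketch below only illustrates the basic co-change notion used as a Reliability proxy: counting how often pairs of files are modified in the same commit and keeping the pairs that reach a support threshold. The function name and threshold are hypothetical.

```python
from collections import Counter
from itertools import combinations

def detect_co_changes(commits, min_support=2):
    """Count how often each pair of files is modified in the same commit and
    keep pairs whose co-change frequency reaches a minimum support threshold.
    `commits` is a list of sets of file paths changed together."""
    pair_counts = Counter()
    for changed_files in commits:
        for pair in combinations(sorted(changed_files), 2):
            pair_counts[pair] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_support}

if __name__ == "__main__":
    history = [
        {"OrderService.java", "OrderRepository.java"},
        {"OrderService.java", "OrderRepository.java", "Invoice.java"},
        {"Invoice.java"},
    ]
    print(detect_co_changes(history))
    # {('OrderRepository.java', 'OrderService.java'): 2}
```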


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Jannis Priesnitz ◽  
Christian Rathgeb ◽  
Nicolas Buchmann ◽  
Christoph Busch ◽  
Marian Margraf

Touchless fingerprint recognition represents a rapidly growing field of research which has been studied for more than a decade. A touchless acquisition process circumvents many issues of touch-based systems, e.g., the presence of latent fingerprints or distortions caused by pressing fingers on a sensor surface. However, touchless fingerprint recognition systems present new challenges. In particular, reliable detection and focusing of a presented finger, as well as appropriate preprocessing of the acquired finger image, represent the most crucial tasks. Further issues, e.g., interoperability between touchless and touch-based fingerprints and presentation attack detection, are currently being investigated by different research groups. Many works have been proposed to put touchless fingerprint recognition into practice, ranging from self-identification scenarios with commodity devices, e.g., smartphones, to high-performance on-the-move deployments that pave the way for new fingerprint recognition application scenarios. This work summarizes the state-of-the-art in the field of touchless 2D fingerprint recognition at each stage of the recognition process. Additionally, technical considerations and trade-offs of the presented methods are discussed, along with open issues and challenges. An overview of available research resources completes the work.
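A minimal, purely illustrative sketch of the preprocessing stage discussed in the survey, assuming a plain RGB finger photo and using only grayscale conversion, contrast stretching, and a crude threshold-based segmentation; real systems use far more elaborate enhancement and ridge extraction.

```python
import numpy as np

def preprocess_finger_image(image):
    """Illustrative touchless preprocessing: grayscale conversion, contrast
    stretching, and a global-threshold segmentation of the finger region."""
    # Grayscale conversion (luminance weights) if the input is RGB.
    if image.ndim == 3:
        image = image @ np.array([0.299, 0.587, 0.114])
    # Contrast stretching to the full [0, 1] range.
    lo, hi = image.min(), image.max()
    stretched = (image - lo) / (hi - lo + 1e-9)
    # Crude foreground segmentation: pixels brighter than the mean belong to the finger.
    mask = stretched > stretched.mean()
    return stretched, mask

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fake_photo = rng.uniform(size=(64, 64, 3))
    enhanced, finger_mask = preprocess_finger_image(fake_photo)
    print(enhanced.shape, float(finger_mask.mean()))
```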


Author(s):  
Haroun Habeeb ◽  
Ankit Anand ◽  
Mausam ◽  
Parag Singla

There is a vast body of theoretical research on lifted inference in probabilistic graphical models (PGMs). However, few demonstrations exist where lifting is applied in conjunction with top-of-the-line applied algorithms. We pursue the applicability of lifted inference to computer vision (CV), with the insight that a globally optimal (MAP) labeling will likely assign the same label to two symmetric pixels. The success of our approach lies in efficiently handling a distinct unary potential on every node (pixel), as is typical of CV applications. This allows us to lift the large class of algorithms that model a CV problem via PGM inference. We propose a generic template for coarse-to-fine (C2F) inference in CV, which progressively refines an initial coarsely lifted PGM for varying quality-time trade-offs. We demonstrate the performance of C2F inference by developing lifted versions of two near state-of-the-art CV algorithms for stereo vision and interactive image segmentation. We find that, compared against flat algorithms, the lifted versions have much superior anytime performance, without any loss in final solution quality.
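The authors' C2F template is not reproduced here; the sketch below, with hypothetical names and a unary-only coarse step, only illustrates the lifting idea of merging pixels with (near-)identical unary potentials into super-nodes before refining per pixel.

```python
import numpy as np

def coarse_to_fine_map(unaries, n_bins=8):
    """Illustrative coarse-to-fine sketch: pixels whose unary potentials fall
    in the same bin are merged into one lifted super-node, a coarse labeling
    is computed per super-node, and the result seeds a per-pixel refinement.
    A real C2F PGM solver would also handle pairwise potentials and
    progressively split super-nodes; this only shows the lifting idea."""
    n_pixels, n_labels = unaries.shape
    # Coarsening: quantize unary vectors so near-symmetric pixels share a group.
    keys = np.round(unaries / unaries.max() * n_bins).astype(int)
    _, group_ids = np.unique(keys, axis=0, return_inverse=True)
    # Coarse step: one argmin per super-node over its summed unary cost.
    labels = np.empty(n_pixels, dtype=int)
    for g in np.unique(group_ids):
        members = group_ids == g
        labels[members] = unaries[members].sum(axis=0).argmin()
    # Fine step: refine each pixel individually, starting from the coarse labeling.
    refined = unaries.argmin(axis=1)
    return labels, refined

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u = rng.random((1000, 4))            # 1000 pixels, 4 labels
    coarse, fine = coarse_to_fine_map(u)
    print((coarse == fine).mean())       # fraction of pixels already correct after the coarse step
```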

