Toward Taming the Overhead Monster for Data-flow Integrity

2022 ◽  
Vol 27 (3) ◽  
pp. 1-24
Author(s):  
Lang Feng ◽  
Jiayi Huang ◽  
Jeff Huang ◽  
Jiang Hu

Data-Flow Integrity (DFI) is a well-known approach to effectively detecting a wide range of software attacks. However, its real-world application has been quite limited so far because of the prohibitive performance overhead it incurs. Moreover, the overhead is enormously difficult to overcome without substantially lowering the DFI criterion. In this work, an analysis is performed to understand the main factors contributing to the overhead. Accordingly, a hardware-assisted parallel approach is proposed to tackle the overhead challenge. Simulations on the SPEC CPU 2006 benchmark show that the proposed approach can completely enforce the DFI defined in the original seminal work while reducing its performance overhead by 4× on average.
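To make the overhead source concrete, below is a minimal Python sketch of the classic DFI check from the seminal software-only scheme (Castro et al., OSDI 2006) that the abstract references. The identifiers and the toy policy are illustrative, not taken from either paper.

```python
# Sketch of the classic DFI check: every store site gets a static ID; a
# runtime definitions table (RDT) records the last writer of each
# address; every load verifies that the last writer is in its statically
# computed set of legal reaching definitions. Names here are illustrative.

class DFIViolation(Exception):
    pass

# Hypothetical static-analysis output: for each load site, the store IDs
# that may legally define the value it reads.
reaching_sets = {"load_balance": {1, 2}}

rdt = {}  # runtime definitions table: address -> ID of last store

def checked_store(store_id, addr, value, memory):
    """Instrumented store: write memory and record the writer's ID."""
    memory[addr] = value
    rdt[addr] = store_id

def checked_load(load_site, addr, memory):
    """Instrumented load: verify the last writer before reading."""
    if rdt.get(addr) not in reaching_sets[load_site]:
        raise DFIViolation(f"illegal definition of {hex(addr)} at {load_site}")
    return memory[addr]

mem = {}
checked_store(1, 0x1000, 100, mem)                # legal writer
print(checked_load("load_balance", 0x1000, mem))  # ok -> 100
checked_store(9, 0x1000, -1, mem)                 # e.g., attacker-controlled write
try:
    checked_load("load_balance", 0x1000, mem)
except DFIViolation as e:
    print("caught:", e)
```

Every store pays for an RDT update and every checked load for a set-membership test, which is where the prohibitive overhead originates; per the abstract, the paper's contribution is hardware support that runs this bookkeeping in parallel with the protected program.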

2001 ◽  
Vol 7 (S2) ◽  
pp. 522-523
Author(s):  
W. Probst ◽  
G. Benner ◽  
B. Kabius ◽  
G. Lang ◽  
S. Hiller ◽  
...  

For the last five decades, transmission electron microscopes have been built around, and guided by, emerging technological opportunities. Even though some “workhorse” microscopes exist, these instruments are still built largely from a technological viewpoint and less from the viewpoint of ease of use across a wide range of applications. On the other hand, leading-edge applications drive the development and use of leading-edge technology; the result is a “race horse” of very limited benefit in the “real world”. During the last decade, computers have been integrated into microscope systems. In most cases, however, they still have to contend with obsolete electron-optical ray-path designs and are thus used more to compensate for imperfect optics and poorly designed ray paths than to provide optimized “real world” capabilities.


Author(s):  
Xin Guo ◽  
Boyuan Pan ◽  
Deng Cai ◽  
Xiaofei He

Low-rank matrix factorizations (LRMF) have attracted much attention due to their wide range of applications in computer vision, such as image inpainting and video denoising. Most existing methods assume that the loss between an observed measurement matrix and its bilinear factorization follows a symmetric distribution, such as the Gaussian or Gamma families. In real-world situations, however, this assumption is often too idealized, because pictures taken under varying illumination and angles may suffer from multi-peaked, asymmetric, and irregular noise. To address these problems, this paper assumes that the loss follows a mixture of Asymmetric Laplace distributions and proposes a robust Asymmetric Laplace Adaptive Matrix Factorization model (ALAMF) under a Bayesian matrix factorization framework. The Laplace assumption makes the model more robust, and the asymmetry makes it more flexible and adaptable to real-world noise. A variational method is then devised for model inference. We compare ALAMF with other state-of-the-art matrix factorization methods on both synthetic and real-world data sets. The experimental results demonstrate the effectiveness of the proposed approach.
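The key modeling fact behind this approach is that the negative log-likelihood of an Asymmetric Laplace residual is, up to constants, the pinball (check) loss. The paper fits a mixture of such distributions with variational inference; the sketch below is only a single-component, point-estimate simplification in numpy to illustrate the asymmetry, with all names and hyperparameters invented for the example.

```python
# Simplified sketch: rank-r factorization X ~ U @ V.T fit by subgradient
# descent on the pinball loss, i.e., the single-component Asymmetric
# Laplace NLL. Not the paper's mixture/variational model.
import numpy as np

def pinball(r, tau):
    """Pinball (check) loss: ALD negative log-likelihood up to scale."""
    return np.where(r > 0, tau * r, (tau - 1) * r)

def al_factorize(X, rank, tau=0.3, lr=1e-3, reg=0.1, steps=2000, seed=0):
    """tau < 0.5 penalizes negative residuals more, so occasional large
    positive noise (e.g., specular highlights) is tolerated better than
    under a symmetric Gaussian or Laplace loss."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(steps):
        R = X - U @ V.T
        G = np.where(R > 0, tau, tau - 1.0)  # subgradient of pinball loss
        U -= lr * (-G @ V + reg * U)
        V -= lr * (-G.T @ U + reg * V)
    return U, V

# Toy check: low-rank data plus sparse, strictly positive outliers.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))
X[rng.random(X.shape) < 0.05] += 10.0  # one-sided outliers
U, V = al_factorize(X, rank=5)
print("median abs residual:", np.median(np.abs(X - U @ V.T)))
```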


2012 ◽  
Vol 106 (3) ◽  
pp. 206-211 ◽  
Author(s):  
Laurie H. Rubel ◽  
Michael Driskill ◽  
Lawrence M. Lesser

Redistricting provides a real-world application suitable for a wide range of mathematics classrooms.


Author(s):  
Christoph Kamann ◽  
Carsten Rother

When designing a semantic segmentation model for a real-world application, such as autonomous driving, it is crucial to understand the network's robustness to a wide range of image corruptions. While there are recent robustness studies for full-image classification, we are the first to present an exhaustive study for semantic segmentation, based on many established neural network architectures. We utilize almost 400,000 images generated from the Cityscapes dataset, PASCAL VOC 2012, and ADE20K. Based on the benchmark study, we gain several new insights. Firstly, many networks perform well with respect to real-world image corruptions, such as a realistic PSF blur. Secondly, some architectural properties significantly affect robustness, such as a Dense Prediction Cell, which was designed to maximize performance on clean data only. Thirdly, the generalization capability of semantic segmentation models depends strongly on the type of image corruption: models generalize well to image noise and image blur, but not to digitally corrupted data or weather corruptions.
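A benchmark of this kind reduces to a loop: corrupt each validation image at a few severities, run the model, and compare per-corruption mIoU against the clean score. Below is a rough numpy/scipy sketch of that loop with two stand-in corruptions (Gaussian noise and Gaussian blur) and a placeholder model; none of the architectures, severity values, or corruption parameters come from the paper.

```python
# Sketch of a corruption-robustness evaluation loop for segmentation:
# apply a corruption at increasing severity and report mean IoU.
import numpy as np
from scipy.ndimage import gaussian_filter

def add_gaussian_noise(img, severity):
    sigma = [0.02, 0.05, 0.1][severity]  # illustrative severities
    return np.clip(img + np.random.normal(0, sigma, img.shape), 0, 1)

def gaussian_blur(img, severity):
    sigma = [1, 2, 4][severity]
    return gaussian_filter(img, sigma=(sigma, sigma, 0))  # spatial dims only

def mean_iou(pred, gt, num_classes):
    """mIoU from a pixel-wise confusion matrix."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (gt.ravel(), pred.ravel()), 1)
    inter = np.diag(cm)
    union = cm.sum(0) + cm.sum(1) - inter
    return float(np.mean(inter / np.maximum(union, 1)))

def evaluate(model, images, labels, corruption, severity, num_classes):
    scores = [mean_iou(model(corruption(x, severity)), y, num_classes)
              for x, y in zip(images, labels)]
    return float(np.mean(scores))

# Placeholder "model": threshold the red channel into 2 classes.
model = lambda x: (x[..., 0] > 0.5).astype(np.int64)
images = [np.random.rand(64, 64, 3) for _ in range(4)]
labels = [model(x) for x in images]  # ground truth = clean prediction
for corrupt in (add_gaussian_noise, gaussian_blur):
    for sev in range(3):
        print(corrupt.__name__, sev,
              round(evaluate(model, images, labels, corrupt, sev, 2), 3))
```

With a real model and dataset swapped in for the placeholders, the per-corruption scores make the paper's third finding measurable: a drop under weather or digital corruptions that does not appear under noise or blur.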


2012 ◽  
Author(s):  
Kelly Dyjak Leblanc ◽  
Caitlin Femac ◽  
Craig N. Shealy ◽  
Renee Staton ◽  
Lee G. Sternberger

2002 ◽  
Author(s):  
Janel H. Rogers ◽  
Heather M. Oonk ◽  
Ronald A. Moore ◽  
M. G. Averett ◽  
Jeffrey G. Morrison

Author(s):  
Dilpreet Singh Brar ◽  
Amit Kumar ◽  
Pallavi ◽  
Usha Mittal ◽  
Pooja Rana
