ACM Transactions on Architecture and Code Optimization ◽

2021 ◽  
Vol 18 (4) ◽  
pp. 1-27
Author(s):  
Tina Jung ◽  
Fabian Ritter ◽  
Sebastian Hack

Memory safety violations such as buffer overflows are a threat to security to this day. A common solution to ensure memory safety for C is code instrumentation. However, this often causes high execution-time overhead and is therefore rarely used in production. Static analyses can reduce this overhead by proving some memory accesses in bounds at compile time. In practice, however, static analyses may fail to verify in-bounds accesses due to over-approximation. Therefore, it is important to additionally optimize the checks that remain in the program. In this article, we present PICO, an approach to eliminate and replace in-bounds checks. PICO exactly captures the spatial memory safety of accesses using Presburger formulas, either verifying accesses statically or substituting existing checks with more efficient ones. Thereby, PICO can generate checks, each of which covers multiple accesses, and place them at infrequently executed locations. We evaluate our LLVM-based PICO prototype with the well-known SoftBound instrumentation on SPEC benchmarks commonly used in related work. PICO reduces the execution-time overhead introduced by SoftBound by 36% on average (and the code-size overhead by 24%). Our evaluation shows that the impact of substituting checks dominates that of removing provably redundant checks.
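
To illustrate the kind of transformation described, here is a minimal C sketch (the check_access helper and the loop are ours for illustration, not SoftBound's actual instrumentation API): a per-access in-bounds check inside a loop is replaced by a single, cheaper check that covers all of the loop's accesses and executes only once, before the loop.

#include <stddef.h>
#include <stdlib.h>

/* Hypothetical per-access check, standing in for the checks an
 * instrumentation such as SoftBound inserts before every access. */
static void check_access(const char *base, size_t size, const char *ptr) {
    if (ptr < base || ptr >= base + size)
        abort();  /* out-of-bounds access detected */
}

/* Naively instrumented: one check per iteration. */
void sum_naive(const char *buf, size_t size, size_t n, long *out) {
    long s = 0;
    for (size_t i = 0; i < n; i++) {
        check_access(buf, size, buf + i);  /* executed n times */
        s += buf[i];
    }
    *out = s;
}

/* After a PICO-style substitution: the accessed region is exactly
 * [buf, buf + n), so one check covering all n accesses can be
 * placed before the loop, where it executes only once. */
void sum_optimized(const char *buf, size_t size, size_t n, long *out) {
    if (n > size)
        abort();  /* one check covers all n accesses */
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += buf[i];
    *out = s;
}

Because the accessed region [buf, buf + n) is an affine function of the loop bounds, a Presburger-based analysis can derive the single covering condition n <= size exactly rather than approximately.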

Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1873
Author(s):  
Iván Gamino del Río ◽  
Agustín Martínez Hellín ◽  
Óscar R. Polo ◽  
Miguel Jiménez Arribas ◽  
Pablo Parra ◽  
...  

Code instrumentation enables the observability of an embedded software system during its execution. One usage example of code instrumentation is the estimation of worst-case execution time (WCET) using hybrid analysis, which combines static code analysis with measurements of the execution time on the deployment platform. Static analysis of the source code determines where to insert the tracing instructions, so that the execution time can later be captured using a logic analyser. The main drawback of this technique is the overhead introduced by the execution of the trace instructions. This paper proposes a modification of the architecture of a pipelined RISC processor that eliminates the execution-time overhead introduced by the code instrumentation. In this way, it allows the tracing to be non-intrusive, since the sequence and execution times of the program under analysis are not modified by the introduction of traces. As a use case of the proposed solution, a processor based on the RISC-V architecture was implemented in VHDL. The processor, synthesized on an FPGA, was used to execute and evaluate a set of examples of instrumented code generated by a WCET estimation tool. The results validate that the proposed architecture executes the instrumented code without overhead.
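
As a rough sketch of what instrumented code looks like in such hybrid WCET workflows (a hedged illustration; the trace-port address and the TRACE macro are our assumptions, not the output of the paper's tool): each instrumentation point is a single store of a tracepoint ID to a memory-mapped location that the logic analyser samples. On a conventional pipeline each such store consumes cycles; the architecture proposed here retires these trace writes without altering the timing of the surrounding code.

#include <stdint.h>

/* Assumed memory-mapped trace port; the real address is platform-specific. */
#define TRACE_PORT ((volatile uint32_t *)0x80000000u)

/* One trace instruction per instrumentation point: a single store whose
 * value identifies the program point. On a conventional pipeline this
 * store costs execution time; a pipeline modified as in the paper can
 * filter such writes so instrumented and uninstrumented timings match. */
#define TRACE(id) (*TRACE_PORT = (uint32_t)(id))

int binary_search(const int *a, int n, int key) {
    TRACE(1);                      /* function entry */
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        TRACE(2);                  /* loop head: iteration count for WCET */
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key) { TRACE(3); return mid; }
        if (a[mid] < key) lo = mid + 1; else hi = mid - 1;
    }
    TRACE(4);                      /* function exit: key not found */
    return -1;
}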


Author(s):  
Vianney Kengne Tchendji ◽  
Jean Frederic Myoupo ◽  
Gilles Dequen

In this paper, the authors highlight the existence of close relations between the execution time, the efficiency and the number of communication rounds in a family of CGM-based parallel algorithms for the optimal binary search tree problem (OBST); these three parameters cannot be simultaneously improved. The family of CGM (Coarse Grained Multicomputer) algorithms they derive is based on Knuth's sequential solution, which runs in O(n²) time and space, where n is the size of the problem. These CGM algorithms use p processors, each with O(n²/p) local memory. In general, the authors show that each algorithm's running time and number of communication rounds are governed by the granularity of their model and by a parameter that depends on the granularity and on p. One special case of these parameters yields a load-balanced CGM-based parallel algorithm; the alternate choice yields an algorithm with a better execution time but without load-balancing and with no fewer communication rounds, i.e., not better than the first algorithm. The authors show that the granularity plays a crucial role in the different techniques they use to partition the problem to be solved, and they study the impact of each scheduling algorithm. To the best of their knowledge, this is the first unified method to derive a set of parameter-dependent CGM-based parallel algorithms for the OBST problem.
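
For context, the sequential baseline these CGM algorithms parallelize is Knuth's O(n²) dynamic program for the OBST. Below is a minimal, self-contained C sketch (simplified to successful-search frequencies only; the function and array names and the example data are ours):

#include <stdio.h>
#include <limits.h>

#define N 6  /* number of keys; illustrative */

/* Knuth's O(n^2) dynamic program for the optimal binary search tree.
 * cost[i][j] is the optimal cost of a tree over keys i..j and
 * root[i][j] its root; w[i][j] is the total frequency of keys i..j. */
long obst(const long freq[N]) {
    static long cost[N + 1][N + 1], w[N + 1][N + 1];
    static int root[N + 1][N + 1];
    for (int i = 0; i < N; i++) {
        cost[i][i] = freq[i];
        w[i][i] = freq[i];
        root[i][i] = i;
    }
    for (int len = 2; len <= N; len++) {
        for (int i = 0; i + len - 1 < N; i++) {
            int j = i + len - 1;
            w[i][j] = w[i][j - 1] + freq[j];
            cost[i][j] = LONG_MAX;
            /* Knuth's bounds: only roots in [root[i][j-1], root[i+1][j]]
             * need to be tried, instead of the naive range [i, j]. */
            for (int r = root[i][j - 1]; r <= root[i + 1][j]; r++) {
                long left = (r > i) ? cost[i][r - 1] : 0;
                long right = (r < j) ? cost[r + 1][j] : 0;
                if (left + right + w[i][j] < cost[i][j]) {
                    cost[i][j] = left + right + w[i][j];
                    root[i][j] = r;
                }
            }
        }
    }
    return cost[0][N - 1];
}

int main(void) {
    const long freq[N] = {34, 8, 50, 9, 13, 17};
    printf("optimal cost: %ld\n", obst(freq));
    return 0;
}

Knuth's monotonicity property, root[i][j-1] <= root[i][j] <= root[i+1][j], makes the inner loop's total work telescope to O(n) per diagonal, giving O(n²) overall instead of the naive O(n³); it is this quadratic table computation that a CGM algorithm partitions among the p processors.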


BMJ Open ◽  
2019 ◽  
Vol 9 (10) ◽  
pp. e030536
Author(s):  
Kanika Chaudhri ◽  
Madeleine Kearney ◽  
Richard O Day ◽  
Anthony Rodgers ◽  
Emily Atkins

Introduction: Forgetting to take a medication is the most common reason for non-adherence to self-administered medication. Dose administration aids (DAAs) are a simple and common solution to improve unintentional non-adherence for oral tablets. DAAs can be in the form of compartmentalised pill boxes, automated medication dispensing devices, blister packs and sachet packets. This protocol aims to outline the methods that will be used in a systematic review of the current literature to assess the impact of DAAs on adherence to medications and health outcomes.
Methods and analysis: Randomised controlled trials will be identified through electronic searches in databases including EMBASE, MEDLINE, CINAHL and the Cochrane Library, from the beginning of each database until January 2020. Two reviewers will independently screen studies and extract data using standardised forms. Data extracted will include general study information, characteristics of the study, participant characteristics, intervention characteristics and outcomes. The primary outcome is to assess the effects of DAAs on medication adherence. The secondary outcome is to evaluate the changes in health outcomes. The risk of bias will be ascertained by two reviewers in parallel using the Cochrane Risk of Bias Tool. A meta-analysis will be performed if data are homogeneous.
Ethics and dissemination: Ethics approval will not be required for this study. The results of the review described within this protocol will be disseminated through publication in a peer-reviewed journal and relevant conference presentations.
PROSPERO registration number: CRD42018096087


2021 ◽  
Author(s):  
Jeanne Alcantara

Apache Spark enables a big data application (one that takes massive data as input and may produce massive data along its execution) to run in parallel on multiple nodes. Hence, for a big data application, performance is a vital issue. This project analyzes a WordCount application using Apache Spark, assessing the impact on execution time and average node utilization. To facilitate this assessment, the number of executor cores and the size of executor memory are varied across different sizes of data that the application has to process and different numbers of nodes in the cluster that the application runs on. It is concluded that different pairs (data size, number of nodes in the cluster) require a different number of executor cores and a different size of executor memory to obtain optimal execution time and average node utilization.
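
As an illustration of the knobs being varied, here is a hypothetical spark-submit invocation (the master, file names and values are placeholders, not the project's actual configuration):

spark-submit \
  --master yarn \
  --num-executors 4 \
  --executor-cores 2 \
  --executor-memory 4g \
  wordcount.py hdfs:///data/input.txt

Executor cores bound the per-executor task parallelism, and executor memory bounds the working set each executor can hold before spilling to disk, which is why the best setting of these two knobs shifts with both the input size and the cluster size.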

