execution engine
Recently Published Documents

TOTAL DOCUMENTS: 126 (FIVE YEARS: 27)
H-INDEX: 12 (FIVE YEARS: 3)

Sensors ◽ 2021 ◽ Vol 21 (22) ◽ pp. 7449
Author(s): Min-Zheng Shieh, Yi-Bing Lin, Yin-Jui Hsu

An Internet of Things (IoT) application typically involves implementations in both the device domain and the network domain. In this two-domain environment, application developers may implement the wrong network functions and/or connect IoT devices that should never be linked, resulting in wrong operations being executed on the network functions. To resolve these issues, we propose the VerificationTalk mechanism to prevent inappropriate IoT application deployment. VerificationTalk consists of two subsystems: BigraphTalk, which verifies IoT device configuration, and AFLtalk, which validates the network functions. VerificationTalk conducts anomaly detection both online, using a runtime monitor, and offline, using American Fuzzy Lop (AFL). The runtime monitor is capable of intercepting potentially harmful data targeting IoT devices. When VerificationTalk detects errors, it provides feedback for debugging. VerificationTalk also assists in building secure IoT applications by identifying security loopholes in network applications. Through appropriate design of the IoTtalk execution engine, the testing capacity of AFLtalk is three times that of traditional AFL approaches.
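As a rough illustration of the runtime-monitor idea described above, the sketch below intercepts device-bound messages and rejects those that violate a link whitelist or field bounds. The message schema, device names, and bounds are invented for the example and are not VerificationTalk's actual API.

```python
import json

# Hypothetical whitelist of device links the verified configuration allows.
ALLOWED_LINKS = {("temp-sensor-1", "hvac-controller"), ("door-sensor-2", "alarm-hub")}

# Hypothetical bounds on payload fields; out-of-range values are treated as harmful.
FIELD_BOUNDS = {"temperature": (-40.0, 125.0), "humidity": (0.0, 100.0)}

def monitor(raw_message: str) -> bool:
    """Intercept a device-bound message; return True iff it is safe to forward."""
    try:
        msg = json.loads(raw_message)
    except json.JSONDecodeError:
        return False  # malformed traffic never reaches the device
    if (msg.get("src"), msg.get("dst")) not in ALLOWED_LINKS:
        return False  # these two devices should never be linked
    for field, (lo, hi) in FIELD_BOUNDS.items():
        if field in msg and not lo <= float(msg[field]) <= hi:
            return False  # potentially harmful out-of-range value
    return True

print(monitor('{"src": "temp-sensor-1", "dst": "hvac-controller", "temperature": 22.5}'))  # True
print(monitor('{"src": "temp-sensor-1", "dst": "alarm-hub", "temperature": 22.5}'))        # False
```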


2021 ◽ Vol 27 (1)
Author(s): Francisco D. B. S. Praciano, Paulo R. P. Amora, Italo C. Abreu, Francisco L. F. Pereira, Javam C. Machado

Abstract
Background: Database Management Systems (DBMSs) use a declarative language to execute queries on stored data. The DBMS decides how the data will be processed and ultimately retrieved, so it must choose the best option among different possibilities based on an estimation process. The optimization process uses estimated cardinalities to make optimization decisions, such as choosing the predicate order.
Methods: In this paper, we propose Robust Cardinality, an approach that calculates cardinality estimates of query operations to guide the DBMS's execution engine toward choosing the best possible plan, or at least avoiding the worst one. By using machine learning instead of the current histogram heuristics, these estimates can be improved, leading to more efficient query execution.
Results: We performed experimental tests on PostgreSQL, comparing both estimators and a modern technique proposed in the literature. With Robust Cardinality, a batch of queries was estimated with lower error, and PostgreSQL executed these queries more efficiently than with its default estimator: we observed a 3% reduction in execution time after reducing the query estimation error by a factor of 4.
Conclusions: From these results, we conclude that this new approach improves query processing in DBMSs, especially the generation of cardinality estimates.
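To make the idea of replacing histogram heuristics with a learned estimator concrete, here is a minimal sketch using a gradient-boosted regressor on synthetic plan features (it assumes scikit-learn and NumPy are installed). The feature encoding and model are illustrative assumptions, not the actual Robust Cardinality architecture.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy feature encoding of a query operation: three synthetic plan features.
# A real system would featurize tables, predicates, and joins; this is illustrative.
X = rng.uniform(size=(2000, 3))
true_card = np.exp(6 * X[:, 0]) * (1 + 1000 * X[:, 1]) / (1 + 5 * X[:, 2])

# Train on log-cardinality, the usual trick to tame the skewed target range.
model = GradientBoostingRegressor().fit(X, np.log(true_card))

est = np.exp(model.predict(X[:100]))
# q-error: max(est/true, true/est); 1.0 is a perfect estimate.
q_err = np.maximum(est / true_card[:100], true_card[:100] / est)
print(f"median q-error: {np.median(q_err):.2f}")
```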


Author(s): Basireddy Ithihas Reddy

There has been great interest in computing experiments on shared-nothing clusters of commodity machines, in which multiple systems run in parallel and work closely together toward the same goal. It is frequently observed that a distributed execution engine such as MapReduce handles the primary input-output workload for such clusters. There are numerous file systems around, viz. NTFS, ReFS, FAT, and FAT32 on Windows and Linux; we studied them and implemented a few distributed file systems (DFSs). It has been observed that DFSs work very well on many small files, but some do not produce the expected output on large files. We implemented benchmark-testing algorithms in each DFS for both small and large files, and the analysis is put forward in this paper. We also encountered various implementation issues of the DFSs, which are likewise mentioned in this paper.
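A minimal local sketch of the kind of small-file versus large-file write benchmark the paper describes is shown below: it writes the same total volume as many small files and as one large file, then compares throughput. The sizes are arbitrary, and a real run would point at a DFS mount rather than a temporary directory.

```python
import os
import tempfile
import time

def write_files(directory: str, count: int, size: int) -> float:
    """Write `count` files of `size` bytes each; return elapsed seconds."""
    payload = os.urandom(size)
    start = time.perf_counter()
    for i in range(count):
        with open(os.path.join(directory, f"f{i}.bin"), "wb") as fh:
            fh.write(payload)
            fh.flush()
            os.fsync(fh.fileno())  # force the write through to storage
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as d:  # stand-in for a DFS mount point
    total = 64 * 1024 * 1024  # move the same 64 MiB both ways
    t_small = write_files(d, count=1024, size=total // 1024)  # many 64 KiB files
    t_large = write_files(d, count=1, size=total)             # one large file
    print(f"small files: {total / t_small / 1e6:.1f} MB/s, "
          f"large file: {total / t_large / 1e6:.1f} MB/s")
```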


2021 ◽ pp. 111033
Author(s): Pierre-Emmanuel Hladik, Félix Ingrand, Silvano Dal Zilio, Reyyan Tekin

Electronics ◽ 2021 ◽ Vol 10 (5) ◽ pp. 621
Author(s): Giuseppe Psaila, Paolo Fosci

Internet technology and mobile technology have enabled producing and diffusing massive data sets concerning almost every aspect of day-to-day life. Remarkable examples are social media and apps for volunteered information production, as well as Open Data portals on which public administrations publish authoritative and (often) geo-referenced data sets. In this context, JSON has become the most popular standard for representing and exchanging possibly geo-referenced data sets over the Internet. Analysts wishing to manage, integrate, and cross-analyze such data sets need a framework that allows them to access possibly remote storage systems for JSON data sets, and to retrieve and query data sets by means of a unique query language (independent of the specific storage technology), exploiting possibly remote computational resources (such as cloud servers) while comfortably working on the PCs in their offices, largely unaware of the real location of the resources. In this paper, we present the current state of the J-CO Framework, a platform-independent and analyst-oriented software framework for manipulating and cross-analyzing possibly geo-tagged JSON data sets. The paper presents the general approach behind the J-CO Framework, illustrating the query language by means of a simple yet non-trivial example of geographical cross-analysis. The paper also presents the novel features introduced by the re-engineered version of the execution engine and the most recent components, i.e., the storage service for large single JSON documents and the user interface that allows analysts to comfortably share data sets and computational resources with other analysts, possibly working in different places around the globe. Finally, the paper reports the results of an experimental campaign, which shows that the execution engine performs more than satisfactorily, demonstrating that our framework can actually be used by analysts to process JSON data sets.
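J-CO ships its own query language (J-CO-QL), so the following is only a plain-Python stand-in for the kind of geographical cross-analysis mentioned above: it joins two small geo-tagged JSON collections with a naive bounding-box containment test. All field names and data are invented for illustration.

```python
import json

# Two toy geo-tagged JSON collections, standing in for data sets fetched
# from Open Data portals or a JSON store; field names are invented.
sensors = json.loads('[{"id": "s1", "lon": 9.66, "lat": 45.69},'
                     ' {"id": "s2", "lon": 11.25, "lat": 43.77}]')
districts = json.loads('[{"name": "Bergamo", "bbox": [9.6, 45.65, 9.72, 45.72]},'
                       ' {"name": "Firenze", "bbox": [11.18, 43.74, 11.32, 43.82]}]')

def contains(bbox, lon, lat):
    """Naive point-in-bounding-box test: (min_lon, min_lat, max_lon, max_lat)."""
    min_lon, min_lat, max_lon, max_lat = bbox
    return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat

# Cross-analysis: pair every sensor with the district whose bbox contains it.
joined = [{"sensor": s["id"], "district": d["name"]}
          for s in sensors for d in districts
          if contains(d["bbox"], s["lon"], s["lat"])]
print(json.dumps(joined, indent=2))
```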


2021 ◽ Vol 30 (2) ◽ pp. 1-27
Author(s): Xiang Gao, Bo Wang, Gregory J. Duck, Ruyi Ji, Yingfei Xiong, ...

Automated program repair is an emerging technology that seeks to automatically rectify program errors and vulnerabilities. Repair techniques are driven by a correctness criterion that often takes the form of a test suite. Such test-based repair may produce overfitting patches: patches that fail on tests outside the suite driving the repair. In this work, we present a repair method that fixes program vulnerabilities without the need for a voluminous test suite. Given a vulnerability as evidenced by an exploit, the technique extracts a constraint representing the vulnerability with the help of sanitizers. The extracted constraint serves as a proof obligation that our synthesized patch should satisfy. The proof obligation is met by propagating the extracted constraint to locations that are deemed to be “suitable” fix locations. An implementation of our approach (ExtractFix) on top of the KLEE symbolic execution engine shows its efficacy in fixing a wide range of vulnerabilities taken from the ManyBugs benchmark, real-world CVEs, and Google's OSS-Fuzz framework. We believe that our work presents a way forward for the overfitting problem in program repair by generalizing observable hazards/vulnerabilities (as a constraint) from a single failing test or exploit.
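The constraint-as-proof-obligation idea can be pictured in miniature with the toy sketch below: a single failing input evidences an out-of-bounds access, and the extracted bounds constraint is enforced at the fix location. The function and the guard are invented for illustration; the actual approach operates on C programs through KLEE.

```python
# A toy "vulnerability": an out-of-bounds read a sanitizer would flag.
def lookup(table, i):
    return table[i]  # crashes when i >= len(table) or i < 0

# Constraint extracted from the failing exploit, serving as the proof
# obligation the patch must satisfy: 0 <= i < len(table).
def lookup_patched(table, i, default=None):
    if not 0 <= i < len(table):  # constraint propagated to the fix location
        return default
    return table[i]

exploit = 7  # the single failing input that evidences the vulnerability
table = ["a", "b", "c"]
print(lookup_patched(table, exploit))  # None instead of an IndexError
print(lookup_patched(table, 1))        # "b": normal behaviour preserved
```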


Electronics ◽ 2020 ◽ Vol 10 (1) ◽ pp. 62
Author(s): Fayozbek Rustamov, Juhwan Kim, Jihyeon Yu, Hyunwook Kim, Joobeom Yun

Greybox fuzzing is among the most reliable and powerful techniques for automated software testing. Nevertheless, most greybox fuzzers are not effective at directed fuzzing, for example toward complicated patches or suspicious and critical sites. To overcome these limitations, Directed Greybox Fuzzing (DGF) approaches were recently proposed. Current DGFs are powerful and efficient approaches that can compete with coverage-based fuzzers. Nevertheless, DGFs fail to strike a balance between effectiveness and efficiency, and their random mutations make it hard to handle complex paths. To alleviate this problem, we propose a target-oriented hybrid fuzzing tool that combines a fuzzer with a dynamic symbolic execution (also referred to as concolic execution) engine. Our proposed method aims to generate inputs that can quickly reach the target sites in each sequence and trigger potentially hard-to-reach vulnerabilities in the program binary. Specifically, to dive deep into the target binary, we designed a technique named BugMiner, and to demonstrate the capability of our implementation, we evaluated it comprehensively on bug hunting and crash reproduction. Evaluation results showed that our implementation could not only trigger hard-to-reach bugs 3.1, 4.3, 2.9, 2.0, 1.8, and 1.9 times faster than Hawkeye, AFLGo, AFL, AFLFast, QSYM, and ParmeSan, respectively, but also scale to several real-world programs.
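A toy version of such a hybrid loop is sketched below: cheap random mutation first, then a concolic-style constraint solve (via the z3-solver package, assumed installed) when the target branch stays uncovered. The target program and the magic constant are invented for the example.

```python
import random
from z3 import BitVec, Solver, sat

MAGIC = 0x4D494E45  # target branch guard the fuzzer must satisfy

def program(x):
    """Toy target: the 'bug' hides behind a hard-to-hit equality check."""
    return x == MAGIC

def hybrid_fuzz(rounds=1000):
    # Phase 1: cheap random mutation, the greybox fuzzer's bread and butter.
    seed = 0
    for _ in range(rounds):
        seed ^= random.getrandbits(32)
        if program(seed):
            return seed
    # Phase 2: coverage stalled, so hand the branch condition to the solver,
    # mimicking how a concolic engine flips the uncovered branch.
    x = BitVec("x", 32)
    s = Solver()
    s.add(x == MAGIC)  # path constraint for the uncovered branch
    if s.check() == sat:
        return s.model()[x].as_long()
    return None

print(hex(hybrid_fuzz()))  # reaches the target branch via the solver
```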

