software execution
Recently Published Documents

TOTAL DOCUMENTS: 105 (five years: 24)
H-INDEX: 10 (five years: 1)

2021 ◽  
Vol 18 ◽  
pp. 192-198
Author(s):  
Meili Dai

As international exchange grows increasingly frequent, English has become a common language for communication between countries. Against this background, an intelligent correction system for students’ English pronunciation errors, based on speech recognition technology, is designed. To provide a stable hardware platform for voice data, the sensor equipment is optimized and integrated with the processor and the intelligent correction circuit. On this basis, an MLP (multilayer perceptron) error correction function is defined; using known recognition-confusion results, errors in the input speech are processed by gain-mismatch compensation, and the software execution environment of the system is built. Combined with the hardware structure, the intelligent correction system for students’ English pronunciation errors is successfully applied, and a comparative experiment is designed to demonstrate the system’s practical value.
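The abstract does not specify the MLP's structure or inputs. As a rough, hedged illustration of the kind of error-classification function described, the sketch below scores acoustic features of a student's utterance with a small multilayer perceptron; the feature layout, layer sizes, and label scheme are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch: classify pronunciation errors from acoustic features
# with a small MLP. Feature extraction, layer sizes, and labels are all
# assumptions; the paper does not specify them.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Assume each utterance is summarized as a fixed-length feature vector
# (e.g., averaged MFCCs) and labeled with a phoneme-level error class.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 13))      # 13 MFCC-like features per utterance
y_train = rng.integers(0, 3, size=200)    # 0 = correct, 1/2 = error types

mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)

# Score a new utterance and report the predicted error class.
x_new = rng.normal(size=(1, 13))
print("predicted error class:", mlp.predict(x_new)[0])
```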


Electronics ◽  
2021 ◽  
Vol 10 (20) ◽  
pp. 2491
Author(s):  
Wooyeol Yang ◽  
Yongsu Park

Malware and ransomware are often encrypted to protect their own code, making it challenging to analyze them by reverse engineering. Recently, various studies have used deep learning to identify the cryptographic algorithms in malware or ransomware that employs anti-reversing technology. In particular, CNNs (convolutional neural networks) are deep-learning algorithms with superior image-classification performance compared to existing machine-learning algorithms. For malicious files protected by anti-debugging or anti-DBI (dynamic binary instrumentation) techniques, traces extracted with conventional debuggers or DBI tools are cut off by those defenses. IPT (Intel Processor Trace), by contrast, can extract an accurate program trace while bypassing anti-debugging and anti-DBI techniques, and it interrupts software execution only minimally. This paper presents a novel method to identify symmetric-key algorithms by applying a CNN to traces extracted with IPT. First, a trace of the symmetric-key encryption is extracted using IPT; it is then converted into an image that serves as input to the CNN. The experiments were carried out with two datasets. The first contained traces of different types of symmetric-key algorithms, and training classified them into nine classes with 100% accuracy. The second contained traces covering various key-bit sizes and block-cipher modes for each type of symmetric-key algorithm; training classified them into 36 classes with 70.55% accuracy. While previous studies have identified only the types of encryption algorithms, this study employed a CNN to also identify the number of key bits and the block-cipher mode.
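The paper's exact trace-to-image encoding is not given in the abstract. The sketch below shows one plausible version of the pipeline it describes: packing raw trace bytes into a grayscale image and classifying it with a small CNN. The image size, network shape, and nine-class setup are assumptions mirroring the first dataset, not the authors' architecture.

```python
# Hypothetical sketch: turn an execution-trace byte stream into a grayscale
# image and classify it with a small CNN. Image size and network shape are
# illustrative assumptions.
import torch
import torch.nn as nn

def trace_to_image(trace_bytes: bytes, side: int = 64) -> torch.Tensor:
    """Pack trace bytes into a side x side grayscale image, zero-padded."""
    buf = torch.zeros(side * side, dtype=torch.float32)
    data = torch.frombuffer(bytearray(trace_bytes[: side * side]),
                            dtype=torch.uint8)
    buf[: data.numel()] = data.float() / 255.0
    return buf.view(1, 1, side, side)  # (batch, channel, H, W)

class TraceCNN(nn.Module):
    def __init__(self, num_classes: int = 9):  # 9 algorithm classes assumed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TraceCNN()
logits = model(trace_to_image(bytes(range(256)) * 16))  # dummy trace bytes
print("predicted class:", logits.argmax(dim=1).item())
```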


2021 ◽  
Vol 2021 ◽  
pp. 1-22
Author(s):  
Yahui Tang ◽  
Tong Li ◽  
Rui Zhu ◽  
Fei Du ◽  
Jishu Wang ◽  
...  

Software evolves rapidly and operates in a changing environment; therefore, in addition to software design and testing, it is essential to observe and understand software execution behavior by modeling data recorded during execution, so as to improve reliability. Nested call relationships between methods are common during software execution, but most process-mining methods cannot discover them and generate only flat models with low fitness. Moreover, such methods easily produce “spaghetti-like” models with low comprehensibility when dealing with complex software execution data. This paper proposes a component-based hierarchical software behavior model discovery method that can discover hierarchical nested call structures at software runtime, improving the fitness of the model; in addition, the method partitions the discovered model into several parts according to component information, improving its comprehensibility and reflecting the interaction behavior within and between components. The proposed approach was implemented in a process-mining toolkit. Using real-life software event logs and public datasets, we demonstrate that, compared with other advanced process-mining techniques, our approach can visualize actual software execution behavior more accurately and understandably while balancing time performance.
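The abstract does not detail how the nested call structure is recovered. One simple way, sketched below under the assumption that the event log records paired call/return events tagged with a component name, is to maintain a call stack per trace; the event format and this stack-based reconstruction are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical sketch: recover nested, component-tagged call structure from
# a software event log with a call stack. Event format is assumed.
from dataclasses import dataclass, field

@dataclass
class Node:
    method: str
    component: str
    children: list = field(default_factory=list)

def build_call_tree(events):
    """events: sequence of (kind, method, component), kind in {'call','return'}."""
    root = Node("<root>", "<none>")
    stack = [root]
    for kind, method, component in events:
        if kind == "call":
            node = Node(method, component)
            stack[-1].children.append(node)   # nest under the current caller
            stack.append(node)
        else:  # 'return' pops back to the caller
            stack.pop()
    return root

log = [("call", "main", "app"), ("call", "parse", "io"),
       ("return", "parse", "io"), ("call", "run", "core"),
       ("return", "run", "core"), ("return", "main", "app")]

def show(node, depth=0):
    print("  " * depth + f"{node.component}::{node.method}")
    for child in node.children:
        show(child, depth + 1)

for top_level in build_call_tree(log).children:
    show(top_level)  # prints the nested call hierarchy with components
```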


2021 ◽  
Vol 2021 ◽  
pp. 1-21
Author(s):  
Đani Vladislavić ◽  
Darko Huljenić ◽  
Julije Ožegović

Network function virtualization (NFV) is a concept aimed at achieving a telecom-grade cloud ecosystem for new-generation networks, focusing on capital and operational expenditure (CAPEX and OPEX) savings. This study introduces an empirical throughput prediction model for virtual network function (VNF) and network function virtualization infrastructure (NFVI) architectures based on the Linux kernel. The model arises from a methodology for performance evaluation and modeling based on the distribution of execution areas (EAs) by CPU core pinning. An EA is defined as a software execution unit that can run in isolation on a compute resource (a CPU core). EAs are derived from the elements and packet-processing principles of Linux-kernel-based NFVIs and VNFs. The measured results exhibit linear behavior, which opens the possibility of applying a model calibration technique to obtain a general VNF and NFVI architecture model with performance prediction and environment-setup optimization. The modeling parameters are derived from the cumulative packet-processing cost, obtained by measurement for the EAs collocated on the CPU core hosting the bottleneck EA. The VNF and NFVI architecture model with performance prediction is successfully validated against measurement results obtained in an emulated environment and is used to predict optimal system configurations and maximal throughput for different CPUs.
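As a hedged illustration of this kind of model (not the authors' actual equations, which the abstract does not give), the sketch below predicts the throughput of a bottleneck CPU core from the cumulative per-packet processing cost of the EAs pinned to it; the EA names and cost values are invented for illustration.

```python
# Hypothetical sketch: predict bottleneck-core throughput from cumulative
# per-packet processing costs of collocated EAs. Cost values and the simple
# linear model are illustrative assumptions, not the paper's data.

def predicted_throughput(core_freq_hz: float, ea_costs_cycles: dict) -> float:
    """Packets/s the core can sustain if each packet pays every EA's cost."""
    total_cycles_per_packet = sum(ea_costs_cycles.values())
    return core_freq_hz / total_cycles_per_packet

# EAs collocated on the core hosting the bottleneck EA (cycles/packet).
ea_costs = {"vnf_rx": 1200.0, "kernel_bridge": 800.0, "vnf_tx": 1000.0}

pps = predicted_throughput(core_freq_hz=2.4e9, ea_costs_cycles=ea_costs)
print(f"predicted throughput: {pps:,.0f} packets/s")  # 800,000 packets/s
```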


2021 ◽  
Vol 11 (16) ◽  
pp. 7379
Author(s):  
Oleg Bystrov ◽  
Ruslan Pacevič ◽  
Arnas Kačeniauskas

The pervasive use of cloud computing has led to many concerns, such as performance challenges in communication- and computation-intensive services on virtual cloud resources. Most evaluations of the infrastructural overhead are based on standard benchmarks; therefore, the impact of communication issues and infrastructure services on the performance of parallel MPI-based computations remains unclear. This paper presents a performance analysis of communication- and computation-intensive software based on the discrete element method, deployed as software as a service (SaaS) on an OpenStack cloud. The performance measured on KVM-based virtual machines and Docker containers of the OpenStack cloud is compared with that obtained on native hardware. Improved mapping of computations to multicore resources reduced internode MPI communication by 34.4% and increased parallel efficiency from 0.67 to 0.78, which shows the importance of communication issues. As the number of parallel processes increased, the overhead of the cloud infrastructure grew to 13.7% and 11.2% of the software execution time on native hardware for the Docker containers and the KVM-based virtual machines, respectively. The observed overhead was mainly caused by OpenStack service processes, which increased the load imbalance of the parallel MPI-based SaaS.
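For readers unfamiliar with the reported metrics, parallel efficiency here is the usual speedup-per-process ratio, and the overhead is runtime relative to native hardware. The short sketch below computes both from illustrative timings; the numbers are invented to mirror the reported magnitudes, not taken from the paper.

```python
# Hypothetical sketch: the two metrics quoted in the abstract, computed from
# invented timings chosen only to mirror the reported magnitudes.

def parallel_efficiency(t_serial: float, t_parallel: float, n_procs: int) -> float:
    """Efficiency = speedup / process count = (T1 / Tp) / p."""
    return (t_serial / t_parallel) / n_procs

def cloud_overhead(t_cloud: float, t_native: float) -> float:
    """Relative extra runtime of the cloud vs. native hardware."""
    return (t_cloud - t_native) / t_native

print(f"efficiency: {parallel_efficiency(1000.0, 80.1, 16):.2f}")  # ~0.78
print(f"overhead:   {cloud_overhead(113.7, 100.0):.1%}")           # 13.7%
```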


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 1011
Author(s):  
Iman Kohyarnejadfard ◽  
Daniel Aloise ◽  
Michel R. Dagenais ◽  
Mahsa Shakeri

Advances in technology and computing power have led to the emergence of complex, large-scale software architectures in recent years. However, these are prone to performance anomalies for various reasons, including software bugs, hardware failures, and resource contention. Performance metrics represent the average load on the system and do not help uncover the cause of a problem when abnormal behavior occurs during software execution. Consequently, system experts have to examine massive amounts of low-level tracing data to determine the cause of a performance issue. In this work, we propose an anomaly detection framework that reduces troubleshooting time and guides developers toward performance problems by highlighting anomalous parts of trace data. Our framework collects streams of system calls during the execution of a process using the Linux Trace Toolkit Next Generation (LTTng) and sends them to a machine-learning module that reveals anomalous subsequences of system calls based on their execution times and frequencies. Extensive experiments on real datasets from two different applications (MySQL and Chrome), covering varying scenarios in terms of available labeled data, demonstrate the effectiveness of our approach in distinguishing normal sequences from abnormal ones.
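The abstract names execution time and frequency as the features the machine-learning module uses. The sketch below shows one plausible way to turn a stream of timed system calls into fixed-length subsequence features for such a module; the window size, feature layout, and event format are assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: featurize timed syscall streams into sliding-window
# vectors (per-call frequency + total duration), the kind of input an
# anomaly-detection module could score. Event format and window size are
# assumptions.
from collections import Counter

SYSCALLS = ["read", "write", "open", "close", "futex"]  # assumed vocabulary

def window_features(events, window=5):
    """events: list of (syscall_name, duration_us). Yields one vector per window."""
    for i in range(0, len(events) - window + 1, window):
        chunk = events[i : i + window]
        counts = Counter(name for name, _ in chunk)
        freq = [counts.get(s, 0) / window for s in SYSCALLS]
        total_time = sum(d for _, d in chunk)
        yield freq + [total_time]  # frequencies plus summed execution time

trace = [("read", 12.0), ("read", 700.0), ("futex", 3.0),
         ("write", 8.0), ("close", 1.0)]
for vec in window_features(trace):
    print(vec)  # an unusually long 'read' inflates the time feature
```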


GigaScience ◽  
2021 ◽  
Vol 10 (6) ◽  
Author(s):  
Jan Christian Kässens ◽  
Lars Wienbrandt ◽  
David Ellinghaus

Abstract
Background: Genome-wide association studies (GWAS) and phenome-wide association studies (PheWAS) involving 1 million GWAS samples from dozens of population-based biobanks present a considerable computational challenge and are carried out by large scientific groups at great expenditure of time and personnel. Automating these processes requires highly efficient and scalable methods and software, but so far there is no workflow solution that easily processes 1 million GWAS samples.
Results: Here we present BIGwas, a portable, fully automated quality-control and association-testing pipeline for large-scale binary and quantitative trait GWAS data provided by biobank resources. By using Nextflow workflow and Singularity software-container technology, BIGwas performs resource-efficient and reproducible analyses on a local computer or any high-performance computing (HPC) system with a single command, with no need to manually install a software execution environment or various software packages. For a single-command GWAS analysis with 974,818 individuals and 92 million genetic markers, BIGwas takes ∼16 days on a small HPC system with only 7 compute nodes to perform a complete GWAS QC and association analysis protocol. Our dynamic parallelization approach enables shorter runtimes on larger HPC systems.
Conclusions: Researchers without extensive bioinformatics knowledge and with few computing resources can use BIGwas to perform multi-cohort GWAS with 1 million GWAS samples and, if desired, use it to build their own (genome-wide) PheWAS resource. BIGwas is freely available for download from http://github.com/ikmb/gwas-qc and http://github.com/ikmb/gwas-assoc.
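The single-command claim follows the standard Nextflow convention of running a pipeline directly from its GitHub repository; the minimal sketch below launches it that way from Python. The invocation shape is the generic `nextflow run <org/repo>` pattern, not a documented BIGwas command line, and any pipeline-specific parameters must come from the repository's own documentation.

```python
# Minimal sketch: launch the BIGwas QC pipeline via the generic Nextflow
# "run from GitHub" convention. This is NOT a documented BIGwas command
# line; required pipeline parameters are described at
# http://github.com/ikmb/gwas-qc and must be appended to the argument list.
import subprocess

subprocess.run(["nextflow", "run", "ikmb/gwas-qc"], check=True)
```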


2021 ◽  
Vol 116 ◽  
pp. 102047
Author(s):  
Imanol Allende ◽  
Nicholas Mc Guire ◽  
Jon Perez ◽  
Lisandro G. Monsalve ◽  
Roman Obermaisser

Author(s):  
Kelly Maggs ◽  
Vanessa Robins

Fuzzing is a systematic, large-scale search for software vulnerabilities that feeds a sequence of randomly mutated input files to the program of interest with the goal of inducing a crash. The information about inputs, software execution traces, and induced call stacks (crashes) can be used to pinpoint and fix errors in the code, or exploited as a means to damage an adversary’s software. In black-box fuzzing, the primary unit of information is the call stack: a list of nested function calls and line numbers that reports what the code was executing at the time it crashed. The source code is not always available in practice, and in some situations even the function names are deliberately obfuscated (i.e., removed or given generic names). We define a topological object called the call-stack topology to capture the relationships between module names, function names, and line numbers in a set of call stacks obtained via black-box fuzzing. In a proof-of-concept study, we show that structural properties of this object, in combination with two elementary heuristics, allow us to build a logistic regression model that predicts the locations of distinct function names over a set of call stacks. The model extracts function-name locations with around 80% precision on data obtained from fuzzing studies of various Linux programs. This has the potential to benefit software-vulnerability experts by helping them read and compare call stacks more efficiently.
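The call-stack topology itself is not specified in the abstract. As a rough illustration of the prediction task it describes, the sketch below builds two simple per-frame features from a set of obfuscated call stacks and fits a logistic regression to flag frames likely to carry distinct function names; the features, labels, and data are invented stand-ins for the paper's topological features.

```python
# Hypothetical sketch: predict which call-stack frames correspond to distinct
# function-name locations with logistic regression. The two features (stack
# depth and how often a module:line pair recurs across stacks) are invented
# stand-ins for the paper's structural properties.
from collections import Counter
from sklearn.linear_model import LogisticRegression

# Each stack is a list of (module, line) frames, innermost first.
stacks = [
    [("libc", 101), ("app", 42), ("app", 7)],
    [("libc", 101), ("app", 42), ("app", 9)],
    [("libz", 55), ("app", 42), ("app", 7)],
]
recurrence = Counter(frame for stack in stacks for frame in stack)

X, y = [], []
for stack in stacks:
    for depth, frame in enumerate(stack):
        X.append([depth, recurrence[frame]])
        # Illustrative labels: frames recurring across stacks are treated as
        # known distinct-function locations for training purposes.
        y.append(1 if recurrence[frame] > 1 else 0)

model = LogisticRegression().fit(X, y)
print(model.predict([[1, 3]]))  # shallow, recurring frame -> likely a name
```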

