kernel module
Recently Published Documents


TOTAL DOCUMENTS

46
(FIVE YEARS 10)

H-INDEX

5
(FIVE YEARS 2)

Author(s):  
Muhammad Ejaz Sandhu

To test the behavior of Linux kernel modules, device drivers, and file systems under faulty conditions, researchers have tried injecting faults in artificial environments. Because such events are rare and unpredictable, localizing and detecting errors in the Linux kernel, device drivers, and file system modules is extremely difficult. The artificial introduction of random faults during normal tests is the only known approach to such problems: a standard method is to generate synthetic faults and study their effects. Various fault injection frameworks for the Linux kernel have been analyzed to simulate such detection. The following paper compares the different approaches and techniques used for fault injection to test Linux kernel modules, including simulating low-resource conditions and detecting memory leaks. The frameworks chosen for these experiments are the Linux Test Project (LTP), KEDR, Linux Fault-Injection (LFI), and SCSI.
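The core idea the abstract describes, randomly injecting synthetic faults during otherwise normal test runs, can be illustrated at user level with a small Python sketch. All names here are hypothetical; kernel-level frameworks such as KEDR instead hook real allocators (e.g. `__kmalloc`) inside the kernel:

```python
import random

def faulty(func, fail_prob, rng, exc=MemoryError):
    """Wrap func so each call fails with probability fail_prob,
    simulating a low-resource condition such as a failed allocation."""
    def wrapper(*args, **kwargs):
        if rng.random() < fail_prob:
            raise exc("injected fault")
        return func(*args, **kwargs)
    return wrapper

def allocate(n):
    # Stand-in for the operation under test.
    return bytearray(n)

rng = random.Random(42)          # seeded so runs are reproducible
alloc = faulty(allocate, fail_prob=0.3, rng=rng)

failures = 0
for _ in range(1000):
    try:
        alloc(64)
    except MemoryError:
        failures += 1
print(failures)  # roughly 30% of the 1000 calls fail
```

The point of the wrapper is that the code under test is exercised unmodified; only the environment around it is made unreliable, which is exactly what the surveyed frameworks do at kernel level.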


2021 ◽  
Vol 11 (18) ◽  
pp. 8379
Author(s):  
Seongmin Kim

Recent innovations in trusted execution environment (TEE) technologies enable the delegation of privacy-preserving computation to the cloud. In particular, Intel SGX, an extension of the x86 instruction set architecture (ISA), accelerates this trend by offering hardware-protected isolation with near-native performance. However, SGX inherently suffers from performance degradation, depending on workload characteristics, due to hardware restrictions and design decisions that primarily concern the security guarantee. System-level optimizations of the SGX runtime and kernel module have been proposed to resolve this, but they cannot effectively reflect application-specific characteristics that largely impact the performance of legacy SGX applications. This work presents a strategy for application-level optimization that uses asynchronous switchless calls to reduce enclave transitions, one of the dominant overheads of using SGX. Based on a systematic analysis, our methodology examines the performance benefit of each enclave transition wrapper and selectively applies switchless calls without modifying the legacy codebases. The evaluation shows that our optimization strategy successfully improves the end-to-end performance of our showcase application, an SGX-enabled network middlebox.
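The switchless-call pattern the abstract relies on can be sketched independently of SGX: instead of paying a costly enclave transition per call, the caller posts requests to a shared queue that a worker thread inside the enclave polls. This is a conceptual Python illustration of the pattern only, not SGX SDK code:

```python
import queue
import threading

# Shared queues stand in for the untrusted/trusted shared buffer.
requests, results = queue.Queue(), queue.Queue()

def enclave_worker():
    """Worker polling requests inside the 'enclave', so no per-call
    world switch is needed."""
    while True:
        item = requests.get()
        if item is None:            # shutdown signal
            break
        results.put(item * item)    # stand-in for the real enclave function

t = threading.Thread(target=enclave_worker)
t.start()

# "Switchless" calls: enqueue arguments, collect results asynchronously.
for x in range(5):
    requests.put(x)
outputs = [results.get() for _ in range(5)]

requests.put(None)
t.join()
print(outputs)  # [0, 1, 4, 9, 16]
```

The trade-off the paper's methodology weighs is visible even here: the worker thread burns a core while polling, so switchless calls only pay off for transition-heavy call sites.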


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1810
Author(s):  
Xin Sun ◽  
Hongwei Luo ◽  
Guihua Liu ◽  
Chunmei Chen ◽  
Feng Xu

In order to remove the strong noise with complex shapes and high density in nuclear radiation scenes, a lightweight network composed of a Noise Learning Unit (NLU) and Texture Learning Unit (TLU) was designed. The NLU is bilinearly composed of a Multi-scale Kernel Module (MKM) and a Residual Module (RM), which learn non-local information and high-level features, respectively. Both the MKM and RM have receptive field blocks and attention blocks to enlarge receptive fields and enhance features. The TLU is at the bottom of the NLU and learns textures through an independent loss. The entire network adopts a Mish activation function and asymmetric convolutions to improve the overall performance. Compared with 12 denoising methods on our nuclear radiation dataset, the proposed method has the fewest model parameters, the highest quantitative metrics, and the best perceptual satisfaction, indicating its high denoising efficiency and rich texture retention.
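The paper's network architecture is only summarized above, but the Mish activation it adopts is a standard, self-regularizing function, x · tanh(softplus(x)). A minimal sketch:

```python
import math

def softplus(x):
    """softplus(x) = ln(1 + e^x), a smooth approximation of ReLU."""
    return math.log1p(math.exp(x))

def mish(x):
    """Mish activation (Misra, 2019): x * tanh(softplus(x)).
    Smooth and non-monotonic, unlike ReLU."""
    return x * math.tanh(softplus(x))

print(round(mish(1.0), 4))  # ≈ 0.8651; mish(0) is exactly 0
```

For large positive inputs Mish approaches the identity, and for large negative inputs it approaches zero, which is why it is often used as a drop-in replacement for ReLU in denoising networks.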


2020 ◽  
Author(s):  
Sarah Wunderlich ◽  
Markus Ring ◽  
Dieter Landes ◽  
Andreas Hotho

Over the years, artificial neural networks have been applied successfully in many areas including IT security. Yet, neural networks can only process continuous input data. This is particularly challenging for security-related, non-continuous data like system calls of an operating system. This work focuses on five different options to preprocess sequences of system calls so that they can be processed by neural networks. These input options are based on one-hot encodings and learning word2vec, GloVe or fastText representations of system calls. As an additional option, we analyse if mapping system calls to their respective kernel modules is an adequate generalization step for (i) replacing system calls or (ii) enhancing system call data with additional information regarding their context. When performing such preprocessing steps it is important to ensure that no relevant information is lost during the process. The overall objective of system call analysis in the context of IT security is to categorize a sequence of them as benign or malicious behavior. Therefore, this scenario is used to evaluate different system call representations in a classification task. Results indicate that a broader range of attacks can be detected when enriching system call representations with corresponding kernel module information. Prior learning of embeddings does not achieve significant improvements. This work is an extension of the work by Wunderlich et al. [1] published in Advances in Intelligent Systems and Computing (AISC, volume 951).
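The simplest of the five input options, one-hot encoding of system calls, can be sketched in a few lines of Python (the toy vocabulary below is illustrative, not taken from the paper):

```python
def one_hot_encode(sequence, vocabulary):
    """Map each system call name in the sequence to a one-hot vector
    over a fixed vocabulary of known system calls."""
    index = {name: i for i, name in enumerate(vocabulary)}
    vectors = []
    for call in sequence:
        v = [0] * len(vocabulary)
        v[index[call]] = 1
        vectors.append(v)
    return vectors

vocab = ["open", "read", "write", "close"]   # toy system call vocabulary
seq = ["open", "read", "close"]
print(one_hot_encode(seq, vocab))
# [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1]]
```

The kernel-module generalization the paper studies would amount to first mapping each call name to its module (e.g. all file-related calls to one symbol) before encoding, shrinking the vocabulary at the cost of per-call detail.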


Author(s):  
Yunlan Du ◽  
Zhenyu Ning ◽  
Jun Xu ◽  
Zhilong Wang ◽  
Yueh-Hsun Lin ◽  
...  
Keyword(s):  

2019 ◽  
Vol 9 (22) ◽  
pp. 4813 ◽  
Author(s):  
Hanbo Yang ◽  
Fei Zhao ◽  
Gedong Jiang ◽  
Zheng Sun ◽  
Xuesong Mei

Remaining useful life (RUL) prediction is a challenging research task in prognostics and receives extensive attention from academia to industry. This paper proposes a novel deep convolutional neural network (CNN) for RUL prediction. Unlike health indicator-based methods which require the long-term tracking of sensor data from the initial stage, the proposed network aims to utilize data from consecutive time samples at any time interval for RUL prediction. Additionally, a new kernel module for prognostics is designed where the kernels are selected automatically, which can further enhance the feature extraction ability of the network. The effectiveness of the proposed network is validated using the C-MAPSS dataset for aircraft engines provided by NASA. Compared with the state-of-the-art results on the same dataset, the prediction results demonstrate the superiority of the proposed network.
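The input scheme described above, consecutive time samples taken at any point in the degradation history rather than tracked from the initial stage, is essentially sliding-window extraction. A minimal sketch with invented sensor values (window length and data are illustrative only):

```python
def sliding_windows(series, window, step=1):
    """Extract all consecutive-sample windows from a sensor series.
    Any window can serve as a network input, regardless of where in
    the degradation history it falls."""
    return [series[i:i + window]
            for i in range(0, len(series) - window + 1, step)]

sensor = [0.9, 0.8, 0.7, 0.5, 0.4, 0.2]      # toy degradation signal
print(sliding_windows(sensor, window=3))
# [[0.9, 0.8, 0.7], [0.8, 0.7, 0.5], [0.7, 0.5, 0.4], [0.5, 0.4, 0.2]]
```

In a CNN-based RUL setup such as the one evaluated on C-MAPSS, each window would be paired with the remaining-cycles label at its last sample.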


Electronics ◽  
2019 ◽  
Vol 8 (9) ◽  
pp. 1021 ◽  
Author(s):  
Adam

Light emitting diodes (LEDs), as an efficient low-consumption lighting technology, are being used increasingly in many applications. The move to LED lighting is also changing the way lighting control systems are designed. Currently, most electronic ballasts and other digital lighting devices implement the Digital Addressable Lighting Interface (DALI) standard. This paper presents a low-cost, low-power, effective DALI LED driver controller based on the Raspberry Pi 3 single-board computer as an open prototyping platform. The control software is developed as a Linux kernel module under Ubuntu 18.04.2 LTS patched with PREEMPT_RT (Preemptive Real-time) for real-time processing. This dynamically loaded kernel module performs all the processing, communication, and control operations between the Raspberry Pi 3-based DALI controller and the DALI LED driver and LED luminaire. Software applications written in C and Python were developed for performance testing purposes. The experimental results showed that the proposed system could efficiently and effectively manage DALI LED drivers and perform lighting operations (e.g., dimming). The system can be used for a variety of purposes, from personal lighting control needs and experimental research in control of electronic ballasts and other control gear, devices, and sensors, to advanced requirements in professional buildings, including energy management, lighting maintenance, and usage.
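The dimming operations mentioned above ride on DALI's 16-bit forward frame: an address byte followed by a data byte. The sketch below shows the short-address frame layout as specified in IEC 62386 (address byte `0AAAAAAS`, with the selector bit S choosing between direct arc power and a command); the paper's kernel module additionally handles the 1200 bps timing and Manchester encoding, which this sketch omits:

```python
def dali_forward_frame(short_address, value, is_command=False):
    """Build the two bytes of a 16-bit DALI forward frame for a
    short-addressed device (0-63). Address byte is 0AAAAAAS:
    S = 0 means the data byte is a direct arc power level,
    S = 1 means it is a command opcode."""
    if not 0 <= short_address <= 63:
        raise ValueError("short address must be 0-63")
    address_byte = (short_address << 1) | (1 if is_command else 0)
    return bytes([address_byte, value & 0xFF])

# Dim the luminaire at short address 5 to arc power level 128.
frame = dali_forward_frame(5, 128)
print(frame.hex())  # "0a80"
```

Framing the bytes is the easy part; the reason the authors use a PREEMPT_RT-patched kernel module is that each bit of this frame must then be Manchester-encoded with tight timing tolerances on a GPIO line.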

